A check on how this website works in the backend
Aman Gupta
Author

dlt is an open-source Python library that loads data from various, often messy data sources into well-structured datasets. It provides lightweight Python interfaces to extract, load, inspect, and transform data. dlt and the dlt docs are built from the ground up to be used with LLMs: the LLM-native workflow takes you from pipeline code to data in a notebook for over 5,000 sources.
dlt is designed to be easy to use, flexible, and scalable.
To get started with dlt, install the library using pip (use a clean virtual environment for your experiments!):
pip install dlt
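Once installed, a quick sanity check is to import dlt in the same environment and print its version:

```py
import dlt

# confirm that dlt imports cleanly and show which version was installed
print(dlt.__version__)
```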
:::tip
If you'd like to try out dlt without installing it on your machine, check out the Google Colab demo or use our simple marimo / WASM-based playground on this docs page.
:::
<Tabs groupId="source-type" defaultValue="rest-api" values={[
  {"label": "REST APIs", "value": "rest-api"},
  {"label": "SQL databases", "value": "sql-database"},
  {"label": "Cloud storages or files", "value": "filesystem"},
  {"label": "Python data structures", "value": "python-data"},
]}>
<TabItem value="rest-api">
Use dlt's REST API source to extract data from any REST API. Define the API endpoints you'd like to fetch data from, the pagination method, and authentication, and dlt will handle the rest:
import dlt
from dlt.sources.rest_api import rest_api_source

source = rest_api_source({
    "client": {
        "base_url": "https://api.example.com/",
        "auth": {
            "token": dlt.secrets["your_api_token"],
        },
        "paginator": {
            "type": "json_link",
            "next_url_path": "paging.next",
        },
    },
    "resources": ["posts", "comments"],
})

pipeline = dlt.pipeline(
    pipeline_name="rest_api_example",
    destination="duckdb",
    dataset_name="rest_api_data",
)

load_info = pipeline.run(source)

# print load info and posts table as data frame
print(load_info)
print(pipeline.dataset().posts.df())
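Here, dlt.secrets looks the token up in dlt's configuration, typically .dlt/secrets.toml or an environment variable, so credentials never need to be hard-coded. As a minimal sketch of the alternative (the EXAMPLE_API_TOKEN variable name is just a placeholder), you can also read the credential yourself and pass it in directly:

```py
import os

import dlt
from dlt.sources.rest_api import rest_api_source

# pass the token explicitly instead of letting dlt.secrets resolve it;
# EXAMPLE_API_TOKEN is a placeholder environment variable for this sketch
source = rest_api_source({
    "client": {
        "base_url": "https://api.example.com/",
        "auth": {"token": os.environ["EXAMPLE_API_TOKEN"]},
    },
    "resources": ["posts", "comments"],
})
```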
:::tip
LLMs are great at generating REST API pipelines! Follow the LLM tutorial and start with one of 5,000+ sources.

Follow the REST API source tutorial to learn more about the source configuration and pagination methods.
:::
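Other pagination styles are declared the same way. For instance, if an API pages with limit/offset query parameters, the paginator block could look roughly like the sketch below (the parameter names are assumptions about a hypothetical API, not part of the example above):

```py
# would replace the "paginator" entry in the client config above; assumes the
# API accepts "limit"/"offset" query parameters and reports the total row
# count under "total" in the response body
paginator_config = {
    "type": "offset",
    "limit": 100,
    "offset": 0,
    "limit_param": "limit",
    "offset_param": "offset",
    "total_path": "total",
}
```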
</TabItem> <TabItem value="sql-database">Use the SQL source to extract data from databases like PostgreSQL, MySQL, SQLite, Oracle, and more.
import dlt
from dlt.sources.sql_database import sql_database

source = sql_database(
    "mysql+pymysql://rfamro@mysql-rfam-public.ebi.ac.uk:4497/Rfam"
)

pipeline = dlt.pipeline(
    pipeline_name="sql_database_example",
    destination="duckdb",
    dataset_name="sql_data",
)

load_info = pipeline.run(source)

# print load info and the "family" table as data frame
print(load_info)
print(pipeline.dataset().family.df())
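By default the source reflects and loads every table in the schema. If you only need a subset, a small sketch using the source's with_resources method (assuming the "family" and "genome" tables exist in the public Rfam database):

```py
# load only two tables instead of reflecting the whole schema
source = sql_database(
    "mysql+pymysql://rfamro@mysql-rfam-public.ebi.ac.uk:4497/Rfam"
).with_resources("family", "genome")

load_info = pipeline.run(source)
print(load_info)
```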
Follow the SQL source tutorial to learn more about the source configuration and supported databases.
</TabItem> <TabItem value="filesystem">The Filesystem source extracts data from AWS S3, Google Cloud Storage, Google Drive, Azure, or a local file system.
import dlt
from dlt.sources.filesystem import filesystem

resource = filesystem(
    bucket_url="s3://example-bucket",
    file_glob="*.csv"
)

pipeline = dlt.pipeline(
    pipeline_name="filesystem_example",
    destination="duckdb",
    dataset_name="filesystem_data",
)

load_info = pipeline.run(resource)

# print load info and the "example" table as data frame
print(load_info)
print(pipeline.dataset().example.df())
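Note that on its own the filesystem resource yields file listings (one row per matched file). To load the rows inside the CSVs themselves, the resource can be piped into a reader transformer; a sketch assuming dlt's bundled read_csv helper:

```py
from dlt.sources.filesystem import filesystem, read_csv

# list the matching CSV files, then parse their contents into rows
files = filesystem(bucket_url="s3://example-bucket", file_glob="*.csv")
csv_rows = (files | read_csv()).with_name("example")

load_info = pipeline.run(csv_rows)
print(load_info)
```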
Follow the filesystem source tutorial to learn more about the source configuration and supported storage services.
</TabItem> <TabItem value="python-data">dlt can load data from Python generators or directly from Python data structures:
import dlt

@dlt.resource(table_name="foo_data")
def foo():
    for i in range(10):
        yield {"id": i, "name": f"This is item {i}"}

pipeline = dlt.pipeline(
    pipeline_name="python_data_example",
    destination="duckdb",
)

load_info = pipeline.run(foo)

# print load info and the "foo_data" table as data frame
print(load_info)
print(pipeline.dataset().foo_data.df())
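Running the pipeline above again appends another ten rows, since the default write disposition is append. If re-runs should update existing rows instead, the resource can declare a primary key and a merge disposition; a minimal sketch:

```py
import dlt

# a primary key plus merge disposition makes re-runs upsert rather than append
@dlt.resource(table_name="foo_data", primary_key="id", write_disposition="merge")
def foo_merged():
    for i in range(10):
        yield {"id": i, "name": f"This is item {i}"}

pipeline = dlt.pipeline(pipeline_name="python_data_example", destination="duckdb")
print(pipeline.run(foo_merged))
```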
Check out the Python data structures tutorial to learn about dlt fundamentals and advanced usage scenarios.
</TabItem>
</Tabs>