Dagster, Airflow, and Prefect all have dbt integrations, but the depth varies across them. The core distinction: when a dbt model fails, does the orchestrator see a failed task, or does it see a stale table with downstream impact on specific freshness SLAs?
## dagster-dbt: First-Class Asset Mapping
The dagster-dbt package reads your project’s manifest.json and creates one Dagster asset per dbt model, seed, and snapshot. Dependencies from ref() and source() calls become edges in the asset graph. dbt tests become Dagster asset checks automatically. Owners, tags, and column metadata carry over from your schema.yml configuration.
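What this looks like mechanically: manifest.json already records every node and its depends_on edges, so the full dependency graph can be reconstructed without running dbt. A simplified, hypothetical sketch of the idea (not dagster-dbt's actual code; node names are illustrative):

```python
# manifest.json (abridged): dbt writes one entry per node, each listing
# the nodes it depends on -- the result of compiling ref()/source() calls.
manifest = {
    "nodes": {
        "model.jaffle.stg_orders": {
            "resource_type": "model",
            "depends_on": {"nodes": ["source.jaffle.raw_orders"]},
        },
        "model.jaffle.mrt_revenue": {
            "resource_type": "model",
            "depends_on": {"nodes": ["model.jaffle.stg_orders"]},
        },
    }
}

def asset_graph(manifest: dict) -> dict:
    """Map each model to its upstream dependencies -- conceptually what
    dagster-dbt does when it turns ref()/source() calls into asset edges."""
    return {
        name: node["depends_on"]["nodes"]
        for name, node in manifest["nodes"].items()
        if node["resource_type"] == "model"
    }
```

Because the graph comes from the manifest rather than from runtime behavior, the orchestrator knows the lineage before anything executes.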
FreeAgent’s 30-criteria evaluation put it well: “A few lines of code and you can have a complete asset map of your dbt models.”
```python
from pathlib import Path

from dagster_dbt import DbtCliResource, DbtProject, dbt_assets

my_project = DbtProject(project_dir=Path("./transform"))
my_project.prepare_if_dev()

@dbt_assets(manifest=my_project.manifest_path)
def my_dbt_assets(context, dbt: DbtCliResource):
    yield from dbt.cli(["build"], context=context).stream()
```

This minimal setup gives you the full asset catalog. Every dbt model appears in the Dagster UI with upstream and downstream dependencies visualized, materialization history tracked, and freshness monitored. The asset mapping note covers the details: DagsterDbtTranslator customization, manifest handling, filtering, and the two project setup paths.
The practical difference shows in failure scenarios. When mrt__finance__monthly_revenue fails in Dagster, you see:
- Which upstream models it depends on and whether they’re fresh
- Which downstream assets are now stale because of this failure
- Which freshness SLAs are at risk
- Whether the failure is in the model SQL or in a data quality check
- The full lineage context, including non-dbt assets (Python ingestion, ML models, BI refresh)
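The downstream-impact piece of that list is a graph traversal: invert the upstream edges and walk forward from the failed asset. A hypothetical sketch with a toy dependency map (names are illustrative, including the non-dbt asset):

```python
from collections import deque

# Toy upstream-dependency map: asset -> list of assets it depends on.
deps = {
    "stg_orders": [],
    "int_revenue": ["stg_orders"],
    "mrt__finance__monthly_revenue": ["int_revenue"],
    "bi_dashboard_refresh": ["mrt__finance__monthly_revenue"],  # non-dbt asset
}

def stale_downstream(failed: str, deps: dict) -> set:
    """Everything transitively downstream of the failed asset is now stale."""
    # Invert upstream edges into downstream edges.
    downstream = {name: [] for name in deps}
    for node, ups in deps.items():
        for up in ups:
            downstream[up].append(node)
    # Breadth-first walk from the failure.
    stale, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for child in downstream[node]:
            if child not in stale:
                stale.add(child)
                queue.append(child)
    return stale
```

An asset-aware orchestrator can run this computation for you because it holds the cross-system graph; in a task-oriented tool the equivalent information is scattered across DAG definitions.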
Half of all Dagster Cloud users run dbt — the highest dbt adoption rate of any orchestrator.
## astronomer-cosmos: Airflow Task Wrapping
astronomer-cosmos (21M+ monthly downloads, 140+ contributors) transforms dbt projects into Airflow DAGs or TaskGroups. Each dbt model becomes an individual Airflow task with retries and error notifications. Setup is similarly concise:
```python
from datetime import datetime

from cosmos import DbtDag, ExecutionConfig, ProfileConfig, ProjectConfig

my_dbt_dag = DbtDag(
    project_config=ProjectConfig("/path/to/dbt/project"),
    profile_config=ProfileConfig(
        profile_name="my_profile",
        target_name="prod",
    ),
    execution_config=ExecutionConfig(
        dbt_executable_path="/usr/local/bin/dbt",
    ),
    schedule_interval="@daily",
    start_date=datetime(2025, 1, 1),
    dag_id="dbt_dag",
)
```

Cosmos does a good job of bringing dbt into Airflow’s world. About 10 lines of code and you have dbt models running as individual Airflow tasks with dependency ordering preserved. It now supports Airflow 3’s @asset decorator too.
But it wraps dbt into Airflow’s task paradigm rather than treating models as first-class data objects. When a model fails, you see “task failed” in the DAG view. You know which model broke, but you don’t automatically see:
- Data freshness implications for downstream consumers
- Cross-system lineage impact (does this affect a Python asset? a Fivetran sync?)
- Asset-level health history over time
You see “task succeeded” or “task failed,” not “this table is fresh” or “this table is stale.” The distinction is the same architectural difference described in orchestrator philosophies: Airflow tracks process execution, Dagster tracks data product state.
For teams already running Airflow for non-dbt workloads, cosmos is the natural integration path — it brings dbt into an existing operational model without requiring a new platform.
## prefect-dbt: Operational Execution
prefect-dbt provides DbtCoreOperation for running dbt CLI commands and DbtCloudJob for triggering dbt Cloud runs. It generates Markdown artifacts in the Prefect UI with links to dbt artifacts.
```python
from prefect import flow
from prefect_dbt.cli.commands import DbtCoreOperation

@flow
def run_dbt():
    result = DbtCoreOperation(
        commands=["dbt build"],
        project_dir="/path/to/dbt/project",
        profiles_dir="/path/to/profiles",
    )
    result.run()
```

The integration is operational: it runs dbt commands well. Retries, logging, and artifact capture work as expected. But it doesn’t map dbt models into Prefect’s internal model the way dagster-dbt does with assets. Prefect tracks that a flow ran and its tasks succeeded or failed. It doesn’t automatically know that mrt__finance__monthly_revenue is a data product with freshness requirements.
Prefect’s strength is in orchestrating surrounding workflows. When dbt is one step in a larger Python-heavy pipeline — API extraction, dbt transformations, downstream delivery — Prefect’s function-oriented model handles the full composition. The dbt integration depth matters less when dbt is a small piece of a larger flow.
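The shape of that composition, sketched with plain functions standing in for Prefect tasks. In real code `extract`, `transform`, and `deliver` would be `@task`-decorated and `pipeline` would be a `@flow`; the step bodies here are stubs, and all names are illustrative:

```python
def extract() -> list:
    # Stand-in for an API extraction step.
    return [{"invoice_id": 1, "amount": 120.0}]

def transform(rows: list) -> float:
    # Stand-in for the dbt step; in practice this would shell out to
    # `dbt build` rather than transform rows in Python.
    return sum(r["amount"] for r in rows)

def deliver(total: float) -> str:
    # Stand-in for downstream delivery (reverse ETL, a report, etc.).
    return f"monthly revenue: {total:.2f}"

def pipeline() -> str:
    # The flow is just a function calling other functions in order --
    # Prefect's model maps directly onto ordinary Python control flow.
    rows = extract()
    total = transform(rows)
    return deliver(total)
```

This function-oriented shape is why Prefect composes heterogeneous steps so naturally: dbt is just one call among several, with Python control flow around it.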
## The Integration Depth Spectrum
The three integrations sit on a spectrum from operational to semantic:
| Aspect | dagster-dbt | astronomer-cosmos | prefect-dbt |
|---|---|---|---|
| Abstraction | Asset per model | Task per model | CLI command |
| Lineage | Full asset graph | DAG dependencies | Flow/task logs |
| dbt tests | Asset checks | Task success/fail | CLI output |
| Freshness | Native per-asset | Not tracked | Not tracked |
| Metadata | Owners, tags, columns | Task-level metadata | Artifacts |
| Selection | dbt select syntax | dbt select syntax | CLI arguments |
| Non-dbt assets | Unified graph | Separate DAGs | Separate flows |
## When Integration Depth Matters
The depth gap surfaces in specific operational scenarios:
Incident triage. A stakeholder reports that a dashboard shows stale data. In Dagster, you open the asset catalog, find the mart model, and see its freshness status, last materialization time, and upstream health in one view. In Airflow, you find the DAG, check the task history, then manually trace backward through upstream DAGs to find the root cause. In Prefect, you search flow run logs.
Impact analysis. You need to modify a shared intermediate model. In Dagster, the asset graph shows every downstream asset that depends on it, including non-dbt assets. In Airflow, you see downstream tasks within the same DAG but may miss cross-DAG dependencies. In Prefect, cross-flow dependencies aren’t tracked.
Freshness monitoring. A mart model needs to be no more than 6 hours old. In Dagster, you set a freshness policy on the asset and get alerts when it’s violated. In Airflow or Prefect, you implement custom monitoring logic outside the orchestrator.
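The custom logic in the Airflow or Prefect case is conceptually small, but it has to live somewhere you maintain. A minimal sketch of the check itself, assuming a 6-hour maximum age (the function name and constant are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness rule: the mart may be at most 6 hours old.
MAX_AGE = timedelta(hours=6)

def is_stale(last_materialized: datetime, now: datetime,
             max_age: timedelta = MAX_AGE) -> bool:
    """True if the asset's last materialization is older than allowed."""
    return now - last_materialized > max_age
```

In Dagster this rule is declared on the asset and evaluated by the platform; hand-rolled versions also need to answer where `last_materialized` comes from, where the check runs, and who gets paged, which is the real maintenance cost.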
Teams whose primary workload is dbt transformations will feel this gap every time something breaks. If dbt is one small piece of a larger Python-heavy pipeline, the integration depth becomes less of a factor — the orchestrator’s value is in coordinating the full workflow, not in deep dbt awareness.
## Convergence
Airflow 3.0’s @asset decorator and cosmos’s support for it indicate movement toward data awareness in task-oriented tools. Whether this reaches parity with dagster-dbt’s depth is an open question — the architectural difference (assets as the foundational abstraction versus assets added as a layer) may constrain how far Airflow can go without a deeper rearchitecture.
For teams choosing today: if the pipeline is 80% dbt transformations, dagster-dbt’s depth is relevant. If dbt is 20% of a heterogeneous pipeline, the orchestrator’s broader capabilities outweigh dbt-specific depth.