Local development, testing, and the PR review workflow are where the day-to-day friction shows up for Dagster, Airflow, and Prefect — and where the three diverge most sharply.
Local Development
Dagster: Full UI, No Docker
Dagster gives you the best local development experience of the three. Run `dagster dev` and you get the full UI on localhost:3000, no Docker required. You can materialize individual assets, inspect lineage, and see run logs, all on your laptop.
```bash
# Start local Dagster with full UI
pip install dagster dagster-webserver dagster-dbt
dagster dev
```

The local instance is functionally identical to the production deployment. You see the same asset graph, the same materialization history (for local runs), and the same lineage visualization. For dbt integration specifically, the manifest is generated locally via `prepare_if_dev()`, so your asset graph reflects your current branch’s dbt models.
Testing is built around typed I/O and pluggable resources. Swap your BigQuery resource for an in-memory one and your tests run without hitting the warehouse:
```python
from dagster import build_asset_context

def test_daily_metrics():
    # Uses in-memory resources, no warehouse connection needed
    context = build_asset_context(
        resources={"bigquery": MockBigQueryResource()}
    )
    result = daily_metrics(context)
    assert result is not None
```

Prefect has argued that Dagster requires “50+ lines of mock setup” per test. That’s an exaggeration for typical asset tests, but resource configuration does add boilerplate compared to testing a plain Python function. The trade-off is that Dagster’s resource system enables genuine environment isolation — the same code runs against different backends in dev, staging, and production — while simpler approaches often have subtle environment-specific bugs.
Airflow: Docker Required, Improving
Airflow has historically been painful locally. You need Docker, typically via the Astro CLI or docker-compose, and iteration is slow because the scheduler needs to parse DAG files. The startup overhead alone — pulling images, starting the scheduler, webserver, and metadata database — can take minutes.
```bash
# Astro CLI (Astronomer's managed Airflow)
astro dev init
astro dev start  # Starts Docker containers for scheduler, webserver, etc.
```

Airflow 2.5 added `dag.test()` to run all tasks in a single Python process, which helped significantly for rapid iteration:
```python
# Quick local testing without full Airflow infrastructure
if __name__ == "__main__":
    my_dag.test()
```

TRM Labs documented making Airflow development 20x faster with custom tooling, dropping dev environment cost from $200/month to $0. But their approach required significant investment in developer tooling that most teams won’t make — custom local runners, file watchers, and environment simulation. Out of the box, local Airflow development remains heavier than the alternatives.
The Astro CLI improves the default experience substantially. Hot-reloading of DAG files, pre-configured Docker images, and environment management bring it closer to Dagster’s `dagster dev` simplicity. For teams using Astronomer’s managed Airflow, the local dev story is meaningfully better than bare Airflow.
Prefect: Lowest Friction
Prefect is the simplest to get running locally. Install the package, start the server, and your flows are Python functions you can call directly:
```bash
pip install prefect
prefect server start  # Optional - runs without server too
```
```bash
# Your flow is just a Python function - call it directly
python my_flow.py
```

Tests are standard pytest with no framework mocking needed:
```python
def test_my_pipeline():
    # Call the flow function directly - it's just Python
    result = my_pipeline(sources=["test_source"])
    assert result is not None
```

No Docker, no resource configuration, no manifest generation. A Python function decorated with `@flow` runs exactly like a Python function. This simplicity is Prefect’s strongest argument for small teams that want orchestration without framework overhead.
The trade-off is that Prefect’s testing simplicity reflects its simpler abstraction. There’s no concept of resource injection or environment-specific configuration built into the test framework, because Prefect doesn’t have Dagster’s resource system. For teams that need different backends in different environments, that configuration management happens outside the orchestrator.
CI/CD Workflows
Dagster+ Branch Deployments
The CI/CD differentiator for Dagster is branch deployments. When you open a pull request, Dagster+ automatically spins up an ephemeral preview environment with your code changes. Reviewers can:
- Inspect the asset graph diff — which assets are new, modified, or removed
- Trigger test materializations in the preview environment
- Verify that asset checks pass against real data
- See lineage changes alongside the code diff
For dbt teams specifically, this means seeing how your SQL changes affect the full data graph before merging. A PR that renames a model, adds an upstream dependency, or changes an incremental strategy produces a visual diff that’s easier to review than reading SQL alone.
No other orchestrator offers this out of the box. For teams doing frequent iteration on dbt models, branch deployments change the PR review workflow entirely — from “review the code and trust that CI tests are sufficient” to “review the code and see the data impact.”
Branch deployments are a Dagster+ (managed cloud) feature. Self-hosted Dagster OSS doesn’t include them.
Airflow CI/CD
Airflow CI/CD is more manual. Typical setups use GitHub Actions or similar to:
- Parse DAGs with `python -c "import my_dag"` to catch syntax errors
- Run unit tests for custom operators and plugins
- Deploy DAG files to the DAG storage bucket (GCS for Cloud Composer, S3 for MWAA)
There’s no built-in preview environment. Some teams build custom staging environments, but this requires maintaining a second Airflow instance with its own infrastructure cost. Astronomer’s deployment pipeline improves this with image-based deploys and environment promotion.
```yaml
# Typical GitHub Actions CI for Airflow
- name: Validate DAGs
  run: |
    python -c "from airflow.models import DagBag; db = DagBag('.'); assert not db.import_errors"
- name: Run tests
  run: pytest tests/
- name: Deploy
  run: gsutil rsync -r dags/ gs://composer-bucket/dags/
```

Prefect CI/CD
Prefect’s CI/CD is the lightest. Flows are Python functions, so standard Python CI applies — linting, type checking, pytest. Deployment uses `prefect deploy` or `prefect.yaml` configuration:
```yaml
deployments:
  - name: daily-pipeline
    entrypoint: flows/pipeline.py:my_pipeline
    work_pool:
      name: my-cloud-run-pool
    schedule:
      cron: "0 6 * * *"
```

No preview environments, but the simplicity of the deployment model means deployments are fast and rollbacks are straightforward — deploy a previous commit’s flow definition.
The Developer Experience Summary
| Aspect | Dagster | Airflow | Prefect |
|---|---|---|---|
| Local startup | `dagster dev` (seconds) | Docker + Astro CLI (minutes) | `prefect server start` (seconds) |
| Local UI | Full asset graph | Full Airflow UI (in Docker) | Flow run tracking |
| Test approach | Resource injection, typed I/O | `dag.test()`, custom fixtures | Standard pytest |
| Test isolation | Resource swapping | Docker-based isolation | None needed (plain Python) |
| PR preview | Branch deployments (Dagster+) | Manual staging environments | Not available |
| Deploy mechanism | Image + manifest deploy | DAG file sync | `prefect deploy` |
| Hot reload | Yes (local) | Yes (Astro CLI) | Yes (by re-running) |
For teams evaluating these tools, spending a day building a simple pipeline in each is more informative than any comparison table. The friction points are personal — what feels natural to one team feels constraining to another. The recommendation: prototype with your actual dbt project, not a toy example, and pay attention to the debugging workflow when something inevitably breaks.