The orchestration question for analytics engineers used to be simple: Airflow, or do it yourself. That era is over. Airflow 3.0 shipped its biggest update in five years, Dagster quietly became the default for dbt-centric teams, Prefect rebuilt its engine from scratch, and Kestra emerged from nowhere to 20,000 GitHub stars. Meanwhile, the Fivetran–dbt Labs merger reshuffled the strategic calculus for anyone relying on dbt Cloud’s built-in scheduler.
If you’re an analytics engineer wondering whether you need an orchestrator, which one fits your stack, or whether a $0/month cron job is genuinely enough, this is the map.
The state of play in 2026
The market has stratified into clear tiers, and the numbers tell a story.
Apache Airflow still dominates by sheer scale. It crossed 30 million monthly PyPI downloads, runs in 80,000+ organizations, and has attracted 3,600+ unique contributors (more than Apache Spark or Kafka). Airflow 3.0, released April 2025, was the biggest release since the 2.x rewrite: DAG versioning, a new React UI, the Task Execution Interface, and an @asset decorator that borrows directly from Dagster’s playbook. By March 2026, Airflow reached v3.1.7.
Dagster leads the “modern challenger” tier. With $47M in total funding (Series B, May 2023), roughly $7.2M ARR as of late 2023, and 15M PyPI downloads in 2024, it had the most active codebase by commit volume, with 27K commits in 2024. The Components framework reached GA in the 1.12.x cycle, and 50% of Dagster Cloud users integrate dbt. That’s the highest dbt adoption rate of any orchestrator, and it’s not close.
Prefect ranks second in raw downloads at 32M PyPI downloads in 2024 and maintains a 25,000-member Slack community. Prefect 3.0 (September 2024) introduced transactional semantics, open-sourced its event-driven engine, and reduced engine overhead by 90%+ over version 2. It’s the tool that feels most like writing normal Python.
Kestra is the fastest-growing newcomer. After securing $8M in seed funding in September 2024, with investors including dbt Labs’ Tristan Handy and Airbyte’s Michel Tricot, it launched Kestra 1.0 on September 9, 2025. Enterprise customers include Apple, Toyota, Bloomberg, and JPMorgan Chase. Its 20,000+ GitHub stars made it the fastest-growing orchestration project in 2024 by star velocity, though practitioner Daniel Beach noted that actual production adoption may lag behind the star count.
On the decline: Luigi received only minor bug fixes in 2024. Azkaban had zero code activity. Oozie is legacy Hadoop-era tooling. Mage (v0.9.79, ~8,500 stars) still exists, but with fewer than 5 active contributors, its long-term sustainability is in question.
The shift that changes everything: tasks vs. assets
Beyond any single tool release, the defining trend is a shift in the fundamental abstraction: from tasks to assets.
In a task-based model (traditional Airflow), you define what tasks to run and their order. The orchestrator has no awareness of the data being produced. You can see that a task succeeded at 8:03 AM. You can’t see whether the data it produced is fresh, correct, or complete.
In an asset-based model (Dagster’s Software-Defined Assets), you define what data products should exist and how they should be produced. The orchestrator tracks lineage, freshness, and materialization states automatically. You don’t ask “did the task run?” You ask “is this table up to date?”
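The contrast can be sketched in a few lines of plain Python. This is an illustration of the two mental models, not any orchestrator's real API:

```python
from datetime import datetime, timedelta

# Task model: the orchestrator only knows that steps ran, in order.
task_log = []

def run_task(name):
    task_log.append(name)  # "succeeded at 8:03 AM" is all you get

# Asset model: the orchestrator knows what data each step produces.
assets = {}  # asset name -> {"at": materialized time, "deps": upstreams}

def materialize(name, deps=()):
    for dep in deps:           # upstream assets must exist first
        if dep not in assets:
            materialize(dep)
    assets[name] = {"at": datetime.now(), "deps": list(deps)}

def is_fresh(name, max_age=timedelta(hours=1)):
    # The question an asset-based orchestrator can answer:
    # not "did the task run?" but "is this table up to date?"
    return name in assets and datetime.now() - assets[name]["at"] < max_age

run_task("build_orders")
materialize("orders", deps=("raw_orders",))
print(task_log, is_fresh("orders"))  # ['build_orders'] True
```

The task log can only say that something ran; the asset registry can answer freshness and lineage questions because it tracks what was produced, when, and from what.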
This maps directly to how analytics engineers already think. Every dbt model is conceptually an asset: a table with upstream dependencies (ref()) and transformation logic (SQL). Dagster’s asset graph is the dbt DAG, extended. Ingestion sources, Python transformations, ML models, and BI dashboard refreshes can all participate in the same dependency graph.
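Concretely, the DAG dbt derives from ref() calls is an ordinary dependency graph. A toy extraction, regex-based for brevity (dbt itself compiles the Jinja, and the model SQL below is invented):

```python
import re

# Hypothetical dbt models, keyed by model name
models = {
    "stg_orders": "select * from {{ source('shop', 'orders') }}",
    "orders": "select * from {{ ref('stg_orders') }}",
    "revenue": "select sum(amount) from {{ ref('orders') }}",
}

REF = re.compile(r"\{\{\s*ref\('([^']+)'\)\s*\}\}")

# Dependency graph: model -> the upstream models it ref()s
dag = {name: REF.findall(sql) for name, sql in models.items()}

def build_order(dag):
    """Topological order: every model after its upstreams, as dbt runs them."""
    done, order = set(), []
    def visit(node):
        for dep in dag.get(node, []):
            if dep not in done:
                visit(dep)
        if node not in done:
            done.add(node)
            order.append(node)
    for node in dag:
        visit(node)
    return order

print(build_order(dag))  # ['stg_orders', 'orders', 'revenue']
```

An asset-aware orchestrator extends exactly this graph outward: ingestion jobs become upstream nodes of stg_orders, and dashboard refreshes become downstream nodes of revenue.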
Airflow 3.0’s introduction of Assets (evolved from Datasets in Airflow 2.4) validates this paradigm shift. But as one practitioner noted: “I don’t believe you can just switch from being task-oriented to assets. This is a much deeper shift that is hard to get for Airflow. Dagster is still miles ahead.” The asset concept in Airflow feels added on. In Dagster, it’s the foundational abstraction everything else builds from.
Kestra approaches the problem differently, championing declarative orchestration via YAML. Its CEO Emmanuel Darras argues that “dbt proved declarative at the transformation layer. The same model is now extending to the full orchestration stack.” Whether YAML definitions or Python decorators win the declarative war remains to be seen, but the direction is clear: the industry is moving away from imperative task graphs.
The Fivetran–dbt Labs merger and what it means
On October 13, 2025, Fivetran and dbt Labs announced an all-stock merger, creating a combined entity approaching $600M ARR under CEO George Fraser, with Tristan Handy as President. Their stated goal: “open data infrastructure” unifying data movement, transformation, metadata, and activation. Fivetran also acquired Census (reverse ETL, May 2025) and Tobiko Data/SQLMesh (September 2025).
This consolidation directly affects orchestration choices. dbt Cloud’s roadmap is now tied to Fivetran’s broader platform play. Teams that want orchestration independence (the freedom to choose their ingestion, transformation, and serving tools) need an orchestration layer they control.
The merger actually makes external orchestration more strategically important. Dagster and Prefect both position themselves as the neutral “glue layer” between best-of-breed tools. For consulting clients, this is a key argument. Recommending Dagster or Airflow as the orchestration layer gives clients vendor optionality that a dbt Cloud-only approach cannot provide.
Do you actually need an orchestrator?
For teams doing only dbt transformations on a single project, a full orchestrator introduces complexity that may not be justified. There’s a dramatic cost cliff between simple solutions ($0–$5/month) and managed orchestrators ($250–$500+/month).
A daily dbt build triggered by Cloud Scheduler + Cloud Run costs $0-$3/month on GCP’s free tier. GitHub Actions can schedule dbt runs within its free 2,000 minutes/month for private repos. dbt Cloud’s built-in scheduler handles cron scheduling, source freshness checks, and CI builds on PRs. These genuinely work.
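A minimal sketch of what the Cloud Run side of that setup might execute, assuming the dbt CLI is installed in the container image (the target name and tag selector here are illustrative):

```python
import subprocess
import sys

def dbt_command(target="prod", select=None):
    """Assemble the dbt CLI invocation; --select narrows the run (e.g. 'tag:daily')."""
    cmd = ["dbt", "build", "--target", target]
    if select:
        cmd += ["--select", select]
    return cmd

def run_job(select=None):
    """Container entry point: exit non-zero so Cloud Run marks the job as failed."""
    result = subprocess.run(dbt_command(select=select))
    sys.exit(result.returncode)

print(dbt_command(select="tag:daily"))
# ['dbt', 'build', '--target', 'prod', '--select', 'tag:daily']
```

Cloud Scheduler invokes the container on a cron schedule and calls run_job; everything else (retries, alerting on non-zero exits) comes from the platform rather than an orchestrator.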
Concrete triggers for upgrading: 5+ interconnected data sources, cross-team collaboration requirements, SLA/freshness commitments, event-driven (not just time-based) scheduling needs, multiple environments (dev/prod), or compliance and audit requirements. Below those thresholds, simple wins.
But watch for what I call the trust erosion pattern. Simple orchestration doesn’t break catastrophically. Trust erodes through small misalignments. Finance gets technically correct but business-incorrect numbers. Pipelines complete “successfully” but deliver insights too late. Monitoring shows green while stakeholders see stale data. By the time the band-aids fail, migration to a real orchestrator is urgent rather than planned.
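One cheap guard against that last failure mode is to check the data’s freshness directly instead of the job’s exit status. A sketch using sqlite3 as a stand-in warehouse; the table, column, and SLA are invented:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Stand-in warehouse with one 30-hour-old row
conn = sqlite3.connect(":memory:")
conn.execute("create table orders (id int, updated_at text)")
stale = (datetime.now(timezone.utc) - timedelta(hours=30)).isoformat()
conn.execute("insert into orders values (1, ?)", (stale,))

def freshness_check(conn, table, column, max_age=timedelta(hours=24)):
    """True if the newest row is within the SLA; the job's exit code is irrelevant."""
    (latest,) = conn.execute(f"select max({column}) from {table}").fetchone()
    age = datetime.now(timezone.utc) - datetime.fromisoformat(latest)
    return age < max_age

print(freshness_check(conn, "orders", "updated_at"))
# False: the pipeline "succeeded", but the data misses a 24-hour SLA
```

This is essentially what source freshness checks in dbt, and freshness policies in asset-based orchestrators, do for you automatically.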
The progression is predictable. Stage 1: single cron job, dbt run && dbt test. Stage 2: multiple scripts, manual monitoring. Stage 3: Slack alerts and basic error handling. Stage 4: inter-pipeline dependencies grow. Stage 5: you wish you’d migrated six months earlier.
If you’re in Stages 1–2, stay simple (see when cron is enough). If you recognize Stage 3, start evaluating.
Matching the tool to the team
Your choice of orchestrator should match how your team thinks about data, not the latest Hacker News thread. For a more detailed side-by-side evaluation, see my decision framework.
Analytics engineers → Dagster
Analytics engineers think in models, refs, and tables. They want scheduled dbt jobs, dependency management between models, freshness monitoring, and visual lineage. Their preferred approach is declarative: they care about what data should exist, not how tasks execute.
Dagster’s asset-centric model maps directly to this mental model. Each dbt model becomes a Dagster asset with automatic lineage, asset checks from dbt tests, and freshness tracking. The 50% dbt overlap among Dagster Cloud users tells you something about product-market fit.
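The mapping from dbt tests to asset checks is mechanical: each test outcome attaches to the model it targets, and a model is healthy only if every check attached to it passed. A toy version of that grouping (the test-to-model mapping is given explicitly here; in dbt, the manifest records it):

```python
# Invented test results; in dbt these come from run_results.json,
# and the manifest maps each test to its model.
results = [
    {"test": "not_null_orders_id", "model": "orders",  "status": "pass"},
    {"test": "unique_orders_id",   "model": "orders",  "status": "fail"},
    {"test": "not_null_rev_day",   "model": "revenue", "status": "pass"},
]

def asset_health(results):
    """An asset is healthy only if every check attached to it passed."""
    health = {}
    for r in results:
        ok = r["status"] == "pass"
        health[r["model"]] = health.get(r["model"], True) and ok
    return health

print(asset_health(results))  # {'orders': False, 'revenue': True}
```

The payoff is that a failing uniqueness test marks the orders asset itself unhealthy, which downstream assets and dashboards can then see, rather than being buried in a task log.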
The learning curve is steep, though. Python decorators, resource configuration, manifest management, and environment setup all require investment. Dagster University (free) and the Components framework are closing this gap.
Data engineers → Airflow
Data engineers think in infrastructure, reliability, and cross-system coordination. They want Python-based workflow definition, container orchestration, event-driven triggers, and broad integration with cloud services. Airflow’s 90+ provider packages, 80,000+ organizations, and battle-tested reputation make it the safe choice for heterogeneous infrastructure.
It’s also the safest career bet: 94% of Airflow users say Airflow knowledge positively impacts their careers. For enterprise teams with DevOps capacity, it remains the default.
Python-first speed → Prefect
Prefect wins when speed of setup matters most and your team values writing normal Python without framework abstractions. Flows are just decorated functions. Testing is standard pytest. Prefect Cloud’s free tier lets you start without a credit card.
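The claim is easy to demonstrate: under the decorator, a flow is an ordinary function, so its logic tests with plain pytest assertions. In this sketch, flow is a no-op stand-in so the code runs without Prefect installed; the real import is from prefect import flow:

```python
def flow(fn):
    # No-op stand-in for Prefect's @flow decorator, so this sketch
    # runs without Prefect installed. The body is plain Python either way.
    return fn

@flow
def daily_revenue(orders):
    """Sum order amounts per day -- an ordinary function with an ordinary return value."""
    totals = {}
    for o in orders:
        totals[o["day"]] = totals.get(o["day"], 0) + o["amount"]
    return totals

# Standard pytest style: call the function, assert on the result.
result = daily_revenue([
    {"day": "2026-03-01", "amount": 40},
    {"day": "2026-03-01", "amount": 60},
    {"day": "2026-03-02", "amount": 10},
])
print(result)  # {'2026-03-01': 100, '2026-03-02': 10}
```

No harness, no mocked scheduler: the orchestration concerns (retries, scheduling, observability) layer on top of code you can already run and test by itself.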
Endpoint Closing achieved a 73.78% reduction in invoice costs switching from Astronomer to Prefect. LiveEO reported tripling development speed after adoption. For small to mid-size teams (2–10) who prioritize developer velocity over data-aware orchestration, Prefect is hard to beat.
YAML declarative → Kestra (with caveats)
Kestra’s YAML-first approach appeals to teams that want orchestration without writing Python. It’s language-agnostic, supports 600+ plugins, and its UI lets you build workflows visually. The enterprise customer list is impressive for a tool that only hit 1.0 in September 2025.
Production adoption evidence at small-to-mid scale is thin, though. The 20K stars are encouraging, but stars don’t equal production deployments. I’d watch this space for another 6-12 months before recommending it for critical workloads.
What to watch for the rest of 2026
Airflow 3.x asset maturation. The @asset decorator shipped, but can Airflow make asset-centric thinking a first-class experience, or will it remain a feature grafted onto a task-based paradigm? The 3.x release cycle will answer this.
Kestra’s production adoption reality check. 20K stars need to translate into production case studies at the scale where analytics engineers actually operate. If that evidence materializes, Kestra becomes a serious contender for dbt-centric teams who prefer YAML over Python.
The Fivetran–dbt integration roadmap. Will the combined entity create a walled garden that makes external orchestration harder, or an open platform that makes it optional? The answer shapes whether dbt Cloud’s built-in scheduler is “good enough” or strategically risky.
Dagster Components lowering the barrier. The Components framework (GA in 1.12) and the dg CLI are designed to make Dagster accessible to SQL-first analytics engineers who find the Python overhead intimidating. If they succeed, Dagster’s learning curve objection weakens considerably.