Kestra uses YAML to define workflows rather than Python decorators, making it language-agnostic and accessible to teams without Python fluency. Its scope is broader than dbt-specific orchestrators: orchestrating any workflow, in any language, with a configuration-first approach.
CEO Emmanuel Darras describes the thesis as: “dbt proved declarative at the transformation layer. The same model is now extending to the full orchestration stack.” The parallel holds — dbt succeeded partly because SQL practitioners didn’t need Python to transform data. Kestra applies the same model to orchestration.
The YAML-First Model
Kestra workflows are YAML files. No Python classes, no decorators, no manifest management. A workflow defines triggers, tasks, inputs, and outputs in a configuration format that any developer can read:
```yaml
id: daily_dbt_build
namespace: analytics

tasks:
  - id: run_dbt
    type: io.kestra.plugin.dbt.cli.DbtCLI
    commands:
      - dbt build
    projectDir: /path/to/project

triggers:
  - id: daily
    type: io.kestra.core.models.triggers.types.Schedule
    cron: "0 6 * * *"
```

The appeal is immediate for teams where not everyone writes Python. A data analyst can read and modify a YAML workflow. A DevOps engineer familiar with Kubernetes manifests or GitHub Actions workflows finds the syntax natural. The learning barrier is configuration literacy, not programming language proficiency.
Kestra supports 600+ plugins spanning databases, cloud services, messaging systems, and file formats. Its UI lets you build workflows visually, which lowers the barrier further for teams that prefer graphical tools over code-first approaches.
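To give a sense of how plugins compose, here is a hedged sketch of a flow that chains an ingestion step into the dbt task from the example above. The shell task type (`io.kestra.plugin.scripts.shell.Commands`) and the script path are assumptions for illustration; check Kestra's plugin catalog for exact type names and parameters.

```yaml
id: ingest_then_transform
namespace: analytics

tasks:
  # Ingestion step. The plugin type follows Kestra's io.kestra.plugin.*
  # naming convention but is illustrative, not verified against the catalog.
  - id: extract
    type: io.kestra.plugin.scripts.shell.Commands
    commands:
      - ./scripts/extract_orders.sh  # hypothetical script

  # Transformation step reuses the dbt plugin shown earlier.
  # Tasks in this list run sequentially, so dbt waits for extraction.
  - id: run_dbt
    type: io.kestra.plugin.dbt.cli.DbtCLI
    commands:
      - dbt build
    projectDir: /path/to/project
```

The sequencing lives in the order of the task list itself, which is part of what makes the configuration diffable: a reviewer can see dependency changes in a plain-text diff.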
The Growth Story
Kestra’s trajectory is striking. It secured $8M in seed funding in September 2024, with a notable investor list that includes dbt Labs’ Tristan Handy and Airbyte’s Michel Tricot — two figures deeply embedded in the modern data stack ecosystem. Kestra 1.0 launched on September 9, 2025, marking its transition from pre-release to production-ready status.
The enterprise customer list includes Apple, Toyota, Bloomberg, and JPMorgan Chase. Its 20,000+ GitHub stars made it the fastest-growing orchestration project in 2024 by star velocity.
The Production Adoption Question
Stars measure awareness, not production deployments. Kestra’s 20K GitHub stars reflect growth, but as practitioner Daniel Beach has noted, actual production adoption may lag behind the star count.
Enterprise adoption often means a single team running a proof of concept, not organization-wide production deployment. Published evidence at the scale where most analytics engineers operate — small-to-mid-size teams (2–15 people) running daily dbt builds, coordinating ingestion with transformation, and needing reliable freshness monitoring — is limited as of early 2026.
Dagster has this evidence: half of its cloud customers run dbt. Airflow has 80,000+ organizations. Prefect has documented case studies with concrete cost savings. Kestra’s production story at the analytics engineering scale is still accumulating.
YAML vs. Python Decorators
The YAML-vs-Python question isn’t just a syntax preference. It reflects a deeper architectural choice about what the orchestration tool optimizes for.
YAML advantages:
- Language-agnostic: teams writing SQL, R, Scala, or shell scripts can define orchestration without learning Python
- Lower barrier for non-developers: analysts and ops teams can read and modify workflows
- Familiar to anyone who works with Kubernetes, GitHub Actions, or dbt's `schema.yml`
- Configuration is inherently serializable and diffable
Python decorator advantages:
- Full programming language for complex logic: conditionals, loops, dynamic DAGs
- Type safety and IDE support (autocomplete, refactoring, error detection)
- Testing with standard Python tools (pytest, mock, fixtures)
- The data engineering community already knows Python
The architectural philosophies note covers the Airflow/Dagster/Prefect mental models in depth. Kestra adds a fourth philosophy: the configuration-first model, where the workflow definition is data (YAML) rather than code (Python). This maps more naturally to infrastructure-as-code thinking than to software engineering thinking.
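To make the contrast concrete: what Python expresses with a loop, a configuration-first tool expresses as data. Below is a hedged sketch of a fan-out over dbt sources in Kestra YAML. The `ForEach` task type, its properties, and the `{{ taskrun.value }}` templating expression are assumptions based on Kestra's documented flowable-task pattern; verify names against the current docs before use.

```yaml
id: per_source_build
namespace: analytics

tasks:
  # Iteration is declared as data, not written as a Python for-loop.
  # Task type and property names are illustrative, not verified.
  - id: each_source
    type: io.kestra.plugin.core.flow.ForEach
    values: ["orders", "customers", "payments"]
    tasks:
      - id: build_source
        type: io.kestra.plugin.dbt.cli.DbtCLI
        commands:
          - dbt build --select source:{{ taskrun.value }}
        projectDir: /path/to/project
```

The trade-off is visible here too: the loop is readable as data, but anything beyond simple iteration (nested conditionals, dynamic value computation) pushes against what configuration expresses comfortably, which is exactly where the Python-decorator camp makes its case.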
Where Kestra Fits for dbt Teams
For analytics engineers evaluating Kestra specifically for dbt orchestration, the honest assessment:
Kestra could work well when:
- Your team is multilingual (not everyone writes Python) and you want orchestration accessible to all team members
- You value the YAML-first approach because it matches your existing configuration patterns (dbt’s YAML, Kubernetes manifests)
- You need language-agnostic orchestration across diverse systems, not just Python-based data pipelines
- You want a visual workflow builder for non-technical team members
Kestra is harder to justify when:
- You want deep dbt integration with per-model asset tracking, freshness monitoring, and lineage — dagster-dbt is meaningfully deeper here
- You need production case studies at small-to-mid scale before committing — the evidence isn’t there yet
- Your team is Python-fluent and wants the expressiveness of Python decorators for complex orchestration logic
- You need branch deployments or similar CI/CD integration for dbt projects
If production case studies materialize at the analytics engineering scale — teams of 3–15 running daily dbt builds with multi-source coordination — Kestra becomes a more grounded option. As of 2026, the more established choice for dbt-centric teams is Dagster or Airflow, depending on team profile.
The Broader Signal
Kestra’s rise reflects a broader industry shift toward declarative orchestration. Dagster implements this through asset-centric Python. Kestra implements it through YAML. Airflow 3.0’s @asset decorator moves in the same direction.
The pattern: orchestrator users declare intent — “this table should be fresh, this pipeline should run daily, these sources should load before transformation starts” — rather than imperatively coding execution sequences.
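In Kestra terms, “these sources should load before transformation starts” can be declared as a flow trigger rather than coded as a polling loop. A sketch, using the same trigger type namespace as the schedule example earlier; the `Flow` trigger and `ExecutionFlowCondition` class names are assumptions patterned on Kestra's naming conventions, so verify them against the trigger documentation.

```yaml
id: transform_on_ingest
namespace: analytics

tasks:
  - id: run_dbt
    type: io.kestra.plugin.dbt.cli.DbtCLI
    commands:
      - dbt build
    projectDir: /path/to/project

triggers:
  # Declares intent: run whenever the upstream ingestion flow completes.
  # Condition class and flowId are illustrative, not verified.
  - id: after_ingestion
    type: io.kestra.core.models.triggers.types.Flow
    conditions:
      - type: io.kestra.core.models.conditions.types.ExecutionFlowCondition
        namespace: analytics
        flowId: ingest_orders  # hypothetical upstream flow
```

The workflow states the dependency as a fact about the system; the orchestrator owns the how and when of execution.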