Lightdash has its own semantic layer, defined in YAML alongside your dbt project, and it is not MetricFlow. The distinction matters more than it might seem. On the surface the two look alike: both define metrics in code, both live in your dbt repository, and both generate SQL against your warehouse. But they make different architectural decisions, and choosing one commits you to a different set of tradeoffs.
What Lightdash’s Layer Looks Like
Lightdash metrics reference dbt columns directly using ${column_name} syntax. There’s no separate “semantic model” concept. Metrics sit under the meta: block in the same YAML files where you document your dbt models:
```yaml
models:
  - name: mrt__sales__orders
    meta:
      primary_key: order__id
      metrics:
        average_order_value:
          type: number
          sql: "${order__total_revenue} / NULLIF(${order__orders}, 0)"
          format: '[$€]#,##0.00'
    columns:
      - name: order__id
        meta:
          metrics:
            order__orders:
              type: count_distinct
      - name: order__revenue
        meta:
          dimension:
            hidden: true
          metrics:
            order__total_revenue:
              type: sum
              format: '[$€]#,##0.00'
```

The column order__revenue becomes both a dimension and the basis for an aggregate metric. average_order_value divides two other metrics; no new language concepts are required. You're working in familiar YAML with a handful of Lightdash-specific keys.
What MetricFlow Looks Like
MetricFlow requires more structure. Metrics build on top of “semantic models,” which in turn build on top of your dbt models. Entities, measures, and dimensions are separate objects, each with a distinct role:
```yaml
semantic_models:
  - name: orders
    defaults:
      agg_time_dimension: order_date
    model: ref('mrt__sales__orders')
    entities:
      - name: order_id
        type: primary
      - name: customer_id
        type: foreign
    measures:
      - name: order_total
        agg: sum
        expr: order__revenue
      - name: order_count
        agg: count_distinct
        expr: order__id
    dimensions:
      - name: order_date
        type: time
        type_params:
          time_granularity: day
```
```yaml
metrics:
  - name: total_revenue
    type: simple
    type_params:
      measure: order_total
  - name: average_order_value
    type: derived
    type_params:
      expr: total_revenue / order_count
      metrics:
        - name: total_revenue
        - name: order_count
```

The separation is deliberate. Semantic models define the mapping between warehouse tables and business entities. Metrics compose on top of measures. Dimensions and entities control how joins work. The result is a more expressive specification, but one that requires learning a distinct vocabulary and maintaining separate YAML files.
The Key Differences
Complexity and Adoption Friction
Lightdash’s approach is simpler. You add meta: blocks to model YAML files you’re already editing. No new file type, no new concepts like entities or measures. A dbt user familiar with schema YAML can add Lightdash metrics within minutes.
MetricFlow requires learning the semantic model / measure / metric / entity hierarchy, creating new .yml files for semantic models, and understanding how entities drive join behavior. The learning curve is steeper, and the setup takes longer before anything useful appears.
For teams with variable analytics engineering experience, or where adoption friction is a real risk, Lightdash’s lower barrier often determines whether the metric layer gets built or stays at “we’ll do it later.”
Tool Coupling
Lightdash metrics only work in Lightdash. If you define average_order_value in Lightdash’s meta: block, it is available in Lightdash’s Explore view and nowhere else. You cannot query it from a different BI tool, from a REST API, from a Slack bot, or from a custom application.
MetricFlow metrics are tool-agnostic. The dbt Cloud Semantic Layer API exposes them via JDBC, GraphQL, and REST — any downstream consumer that speaks those protocols can query governed metrics. Looker, Tableau, Sigma, AI agents, custom applications, and Slack integrations can all read the same MetricFlow metric definitions. This is the headless BI promise applied to dbt.
The coupling question matters most in two scenarios:
- Multiple BI tools. If you run Lightdash for analysts and want metric definitions accessible to a customer-facing dashboard built in React, you need either MetricFlow (tool-agnostic API) or a custom layer on top of Lightdash’s API (Cloud Pro only).
- BI tool migration. If Lightdash stops meeting your needs and you switch to a different tool, Lightdash metric definitions don’t transfer. MetricFlow definitions do, for any tool that connects to the dbt Semantic Layer API.
Join Semantics
Lightdash handles joins through explicit meta.joins configuration on the primary model. You define the join SQL and declare the relationship cardinality (many-to-one, one-to-many, one-to-one). Lightdash uses the cardinality declaration to warn about fanout risk on aggregate metrics — see Lightdash Joins and Fanout Protection for the full details.
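A minimal sketch of that configuration, assuming a hypothetical mrt__sales__customers model (the orders model name comes from the example above):

```yaml
models:
  - name: mrt__sales__orders
    meta:
      joins:
        - join: mrt__sales__customers   # hypothetical customers model
          sql_on: ${mrt__sales__orders.customer_id} = ${mrt__sales__customers.customer_id}
          relationship: many-to-one     # cardinality declaration Lightdash uses for fanout warnings
```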
MetricFlow uses an entity-based join system. Joins are inferred from shared entity references across semantic models. Two models with a customer_id foreign key and a customer_id primary key join automatically without explicit SQL — MetricFlow resolves the relationship from the entity graph. This is more powerful for complex multi-model scenarios but requires correctly declaring entity types across all semantic models.
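As a sketch of how that inference works, here is a second semantic model (again assuming a hypothetical mrt__sales__customers model) whose primary customer_id entity matches the foreign customer_id entity declared on orders above:

```yaml
semantic_models:
  - name: customers
    model: ref('mrt__sales__customers')  # hypothetical model
    entities:
      # Primary here, foreign on `orders`: MetricFlow infers the join
      - name: customer_id
        type: primary
    dimensions:
      - name: customer_region   # illustrative dimension
        type: categorical
```

Querying total_revenue grouped by customer_region would then resolve the orders-to-customers join from the entity graph, with no join SQL written anywhere.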
Time Intelligence
MetricFlow has built-in support for time intelligence: period-over-period comparisons, cumulative metrics, and conversion metrics (tracking events within a time window). These are first-class metric types with no direct Lightdash equivalent; Lightdash’s post-calculation metrics (percent_of_previous, running_total) are experimental and more limited.
If period-over-period analysis is a core requirement — comparing this month to last month, tracking 7-day rolling averages — MetricFlow handles it more reliably.
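As illustrative sketches (the exact keys have shifted across dbt versions, so treat these as directional rather than copy-paste): a 7-day rolling revenue metric as a cumulative type, and a month-over-month comparison as a derived metric with an offset input. Both build on the order_total measure and total_revenue metric defined earlier; revenue_rolling_7d and revenue_mom_change are hypothetical names.

```yaml
metrics:
  # 7-day rolling revenue: MetricFlow accumulates the measure over a trailing window
  - name: revenue_rolling_7d
    type: cumulative
    type_params:
      measure: order_total
      cumulative_type_params:
        window: 7 days   # earlier spec versions placed `window` directly under type_params

  # Month-over-month comparison: the same metric referenced twice,
  # once offset by one month and aliased
  - name: revenue_mom_change
    type: derived
    type_params:
      expr: total_revenue - total_revenue_prev_month
      metrics:
        - name: total_revenue
        - name: total_revenue
          offset_window: 1 month
          alias: total_revenue_prev_month
```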
When Lightdash’s Layer Fits
Lightdash’s approach makes sense when:
- One BI tool for everything. Your team queries Lightdash and nothing else. No external consumers need metric definitions via API.
- Adoption is the primary risk. Analytics engineers need to get metrics defined quickly, without a steep learning curve. Shipping 80% of the metric layer that everyone uses beats a complete MetricFlow implementation that stays on a backlog.
- dbt v1.9 or earlier. MetricFlow is deeply integrated with dbt Core starting in v1.6, but the specification evolved significantly through v1.10 and the Fusion engine. Teams on older dbt versions may find Lightdash’s meta: approach more stable.
When MetricFlow Fits
MetricFlow is the better choice when:
- Multiple consumers. Metrics need to be available to BI tools, AI agents, embedded analytics, and custom applications through a stable API.
- Time intelligence requirements. Period-over-period, cumulative, and conversion metrics are core to your analysis workflows.
- Tool flexibility. You’re not certain Lightdash will remain your BI tool long-term and want metric definitions that transfer.
- Complex join graphs. Entity-based joins across many models are cleaner to define and maintain than explicit join SQL across many meta: blocks.
They’re Not Mutually Exclusive
Lightdash can query the dbt Cloud Semantic Layer as a secondary data source. Teams that have built MetricFlow definitions for their canonical metrics can surface those in Lightdash alongside Lightdash’s native metrics.
This hybrid path is uncommon but it exists. The practical pattern: define high-priority, governance-critical metrics in MetricFlow (available across tools via API), and use Lightdash’s native meta: approach for exploratory metrics that matter only within the Lightdash context.
The important thing to avoid: maintaining the same metric definition in both systems simultaneously. That defeats the purpose of metrics in version control — you’re back to multiple sources of truth with drift risk between them. Pick one approach as primary for any given metric.
For the broader context of where these semantic layers sit in the data stack, see Semantic Layer Architecture, which compares MetricFlow against Snowflake Semantic Views and Databricks Metric Views.