Lightdash metrics sit under the meta: block in your dbt YAML, which means they live in the same file as your model documentation, in version control, reviewed in pull requests. The syntax is different from MetricFlow — Lightdash predates the MetricFlow spec and has its own vocabulary — but the underlying principle is the same: define the calculation once, governed in code, consumed by the BI layer.
Lightdash divides metrics into three categories. The distinction is not cosmetic — it determines what a metric can reference and how it is evaluated.
The Three Categories
Aggregate metrics perform SQL aggregations on dimensions. These are the foundation of your metric layer. They reference dimensions using ${column_name} and produce a single aggregated value per group when users query them.
Non-aggregate metrics calculate values from other metrics. They cannot touch raw columns directly. They reference existing metrics using ${metric_name}. Ratios, percentages, and derived calculations belong here.
Post-calculation metrics (experimental) apply window functions after other metrics resolve. They include percent_of_previous, percent_of_total, and running_total. They’re useful for contribution analysis and cumulative views, but their experimental status means they may behave unexpectedly with complex dimension combinations.
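As a sketch of what a post-calculation definition might look like — the feature is experimental, so the field names below are an assumption extrapolated from how the other metric types are declared; check the current Lightdash docs before relying on this shape:

```yaml
metrics:
  total_revenue:
    type: sum
  # Experimental post-calculation metric: cumulative sum of total_revenue.
  # Schema shown here is an assumption based on the other metric types.
  revenue_running_total:
    type: running_total
    sql: ${total_revenue}
```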
The rule that trips people up: aggregate metrics can only reference dimensions, non-aggregate metrics can only reference other metrics. Mixing the two in a single sql expression will error. This is not a bug — it reflects a real distinction in how SQL aggregations work. You cannot compute SUM(revenue) / total_orders in a single expression because total_orders is itself an aggregate that must be computed separately.
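To make the rule concrete, this is the kind of definition that fails validation (metric and column names are illustrative):

```yaml
meta:
  metrics:
    average_order_value:
      type: number
      # Invalid: ${order__revenue} is a dimension, and a non-aggregate
      # (number) metric may only reference other metrics. The fix is to
      # define a separate sum metric on order__revenue, then divide the
      # two metrics here.
      sql: "SUM(${order__revenue}) / ${total_orders}"
```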
Available Metric Types
| Category | Type | What It Does |
|---|---|---|
| Aggregate | sum | Sum of a dimension |
| Aggregate | count | Row count |
| Aggregate | count_distinct | Unique values |
| Aggregate | average | Mean |
| Aggregate | min / max | Minimum or maximum |
| Aggregate | percentile | Nth percentile (with percentile property) |
| Aggregate | median | Median value |
| Non-aggregate | number | Math on other metrics |
| Non-aggregate | boolean | Conditional logic on metrics |
| Non-aggregate | string / date | Formatting or manipulation |
| Post-calc | percent_of_total | Percentage contribution |
| Post-calc | running_total | Cumulative sum |
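The percentile type is the one entry in the table that needs an extra property. A minimal sketch, using an illustrative order__revenue column:

```yaml
columns:
  - name: order__revenue
    meta:
      metrics:
        p90_order_revenue:
          type: percentile
          percentile: 90  # the percentile property selects the Nth percentile
```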
Column-Level vs Model-Level Placement
Where a metric is defined determines what it can reference. The placement rule follows a simple logic: if the metric operates on a single column, define it under that column. If it combines multiple metrics, it goes under the model.
```yaml
models:
  - name: mrt__sales__orders
    meta:
      primary_key: order__id
      metrics:
        average_order_value: # model-level: references two other metrics
          type: number
          sql: "${total_revenue} / NULLIF(${total_orders}, 0)"
          format: '[$€]#,##0.00'
          description: "Average revenue per order, post-discount"
    columns:
      - name: order__id
        meta:
          metrics:
            total_orders: # column-level: aggregates this column
              type: count_distinct
              description: "Number of unique orders"
      - name: order__revenue
        meta:
          dimension:
            type: number
            hidden: true
          metrics:
            total_revenue: # column-level: aggregates this column
              type: sum
              format: '[$€]#,##0.00'
              description: "Sum of order revenue in EUR"
```

total_orders and total_revenue are aggregate metrics. Each lives under the column it aggregates. average_order_value is a non-aggregate metric that divides one aggregate by another — it goes under the model’s meta.metrics block because it references metrics, not columns.
The NULLIF(${total_orders}, 0) pattern is standard for any division-based metric. Without it, empty date ranges (zero orders) produce a division-by-zero error rather than a null value. Always use NULLIF for the denominator in non-aggregate metrics.
Metric Configuration Properties
filters
Available on aggregate metrics only. Pre-filters the metric’s underlying data before aggregation:
```yaml
columns:
  - name: order__revenue
    meta:
      metrics:
        completed_revenue:
          type: sum
          filters:
            - field: order__status
              operator: equals
              value: completed
```

This is different from filtering in the Explore view. A filters property at the metric level is permanent — completed_revenue always excludes non-completed orders, regardless of what a user filters in the UI. Use this for metrics where the filter is part of the definition, not an ad-hoc user choice.
format and compact
format uses the same spreadsheet-style patterns as dimensions. compact abbreviates large numbers:
```yaml
metrics:
  total_revenue:
    type: sum
    format: '[$€]#,##0.00'
    compact: millions # displays as "€1.2M" instead of "€1,234,567.89"
```

Compact options: thousands, millions, billions. Use sparingly — abbreviated numbers lose precision and can mislead in tables where exact values matter.
show_underlying_values
Controls which dimensions appear when a user drills into a metric result:
```yaml
metrics:
  total_revenue:
    type: sum
    show_underlying_values:
      - order__id
      - order__created_at
      - order__customer_id
      - order__status
```

Without this property, Lightdash shows all non-hidden dimensions in the drill view. Specifying show_underlying_values restricts the drill to the most relevant fields, reducing noise for end users.
groups
Assigns the metric to a sidebar section. Covered in Organizing Lightdash Metrics at Scale.
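As a quick preview (the group label is illustrative), assignment looks like this:

```yaml
metrics:
  total_revenue:
    type: sum
    groups: ["Revenue"]  # sidebar section this metric appears under
```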
How This Compares to MetricFlow
If you’re familiar with MetricFlow’s metric types, the Lightdash approach feels simpler but less expressive. Lightdash has no concept of semantic models, entities, or measures — you go directly from dbt columns to metrics. There’s no fanout protection built into the metric definition (that’s handled at the join level), and no period-over-period calculation type.
The simplicity is intentional. Lightdash’s metric layer is tightly scoped to what’s in your dbt project YAML. MetricFlow’s is more general — it’s designed to serve any downstream tool, not just one BI product. The tradeoff: Lightdash metrics only work in Lightdash. MetricFlow metrics can feed any tool that speaks the dbt Semantic Layer API.
Neither is objectively better. The question is whether you want metrics that live entirely within one BI tool’s ecosystem (Lightdash’s approach) or metrics that can serve multiple consumers through a tool-agnostic API (MetricFlow’s approach). Both qualify as Metrics as Code — definitions in version control, reviewed in pull requests, tested in CI.
Promoting Ad-Hoc Metrics to Governed Definitions
Lightdash supports custom metrics created directly in the Explore view — clicking a dimension and choosing an aggregation. These are per-chart and not globally governed. When a custom metric proves useful and analysts start replicating it across charts, that’s the signal to promote it.
The write-back feature (shipped early 2025) creates a pull request to your dbt YAML directly from the Lightdash UI. An analyst can define a custom metric in Explore, verify it against actual data, and then promote it to version control without writing YAML manually. The PR goes through the normal code review process. Once merged, the metric is globally available and governed like any other definition.
This workflow — prototype in UI, codify in PR — is the practical path for teams that want governance without requiring analysts to edit YAML by hand.