These MetricFlow anti-patterns appear repeatedly in projects that grew without a clear metric strategy. Each one technically works but creates problems at scale.
## One-Off Models for Metrics
The most common anti-pattern: creating a new dbt model just to define a metric.
```sql
-- Bad: models/metrics/monthly_revenue.sql
SELECT
    DATE_TRUNC('month', ordered_at) AS month,
    SUM(amount) AS monthly_revenue
FROM {{ ref('mrt__finance__orders') }}
GROUP BY 1
```

This model duplicates transformation logic that belongs in the semantic layer. The orders mart already has the `amount` column and `ordered_at` timestamp. A MetricFlow metric on top of that mart handles the aggregation and time granularity natively. The one-off model adds a table to your warehouse, a node to your DAG, and a maintenance burden, all for something that MetricFlow computes on the fly.
The pattern is especially insidious because it works. The table exists. Dashboards can query it. But now you have two places where “monthly revenue” is calculated: the one-off model and whatever metric definition eventually gets created. They will diverge.
The fix is simple: define the metric on your existing mart and let MetricFlow handle the aggregation.
```yaml
metrics:
  - name: monthly_revenue
    label: "Monthly Revenue"
    description: "Sum of order amounts, aggregated monthly"
    type: simple
    type_params:
      measure: order_amount
```

Keep your project structure clean by avoiding one-off models that exist only for metric computation. If a calculation needs a model, it should be a reusable intermediate or mart model, not a single-purpose aggregation that duplicates the semantic layer's job.
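The metric above resolves to the same aggregation the one-off model hardcoded. As a rough illustration of what MetricFlow computes on the fly (hypothetical in-memory order rows standing in for the mart, not MetricFlow's actual engine):

```python
from collections import defaultdict
from datetime import date

# Hypothetical order rows standing in for the orders mart.
orders = [
    {"ordered_at": date(2026, 1, 5), "amount": 100.0},
    {"ordered_at": date(2026, 1, 20), "amount": 50.0},
    {"ordered_at": date(2026, 2, 3), "amount": 75.0},
]

# Truncate to month, then sum — the aggregation the metric
# definition delegates to the semantic layer at query time.
monthly_revenue = defaultdict(float)
for row in orders:
    month = row["ordered_at"].replace(day=1)  # DATE_TRUNC('month', ...)
    monthly_revenue[month] += row["amount"]

# monthly_revenue[date(2026, 1, 1)] → 150.0
# monthly_revenue[date(2026, 2, 1)] → 75.0
```

Because the aggregation happens at query time, asking for weekly or daily granularity requires no new model, only a different group-by.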
## Sum of Ratios
This anti-pattern is mathematically dangerous because it produces silently incorrect results.
```yaml
# Wrong: will give mathematically incorrect results
- name: avg_conversion_rate
  type: simple
  type_params:
    measure: conversion_rate  # This is already a percentage per row
```

If the underlying table has a pre-computed `conversion_rate` column (conversions divided by sessions per row), averaging those percentages gives the wrong answer. Store A with 100 sessions and 50% conversion plus Store B with 10,000 sessions and 2% conversion does not average to 26%. The correct overall rate is closer to 2.5%, weighted by session volume.
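The store example checks out numerically. A quick Python sketch, using the figures from the text:

```python
# Store A: 100 sessions at 50% conversion → 50 conversions.
# Store B: 10,000 sessions at 2% conversion → 200 conversions.
stores = [
    {"sessions": 100, "conversions": 50},
    {"sessions": 10_000, "conversions": 200},
]

# Averaging the pre-computed per-store rates: wrong.
naive = sum(s["conversions"] / s["sessions"] for s in stores) / len(stores)

# Summing numerator and denominator first, then dividing: right.
correct = sum(s["conversions"] for s in stores) / sum(s["sessions"] for s in stores)

print(f"{naive:.1%}")    # 26.0%
print(f"{correct:.2%}")  # 2.48%
```

The unweighted average overstates the true rate by an order of magnitude because it gives the 100-session store the same weight as the 10,000-session store.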
The fix is a ratio metric with separate numerator and denominator measures:
```yaml
measures:
  - name: conversions
    agg: sum
  - name: sessions
    agg: sum
```

```yaml
metrics:
  - name: conversion_rate
    type: ratio
    type_params:
      numerator: conversions
      denominator: sessions
```

The semantic layer computes the ratio after aggregating the components. This is mathematically correct regardless of how the query groups the data.
The rule: never aggregate a pre-computed ratio. Always keep numerator and denominator as separate measures and let MetricFlow divide them at query time.
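The same discipline holds when a query groups by a dimension: sum numerator and denominator within each group, then divide once per group. A pure-Python sketch of that evaluation order (made-up rows, not MetricFlow internals):

```python
from collections import defaultdict

rows = [
    {"region": "emea", "conversions": 10, "sessions": 400},
    {"region": "emea", "conversions": 30, "sessions": 600},
    {"region": "amer", "conversions": 5, "sessions": 1000},
]

# Aggregate the component measures per group first...
totals = defaultdict(lambda: {"conversions": 0, "sessions": 0})
for r in rows:
    totals[r["region"]]["conversions"] += r["conversions"]
    totals[r["region"]]["sessions"] += r["sessions"]

# ...then divide once per group, never per row.
conversion_rate = {
    region: t["conversions"] / t["sessions"] for region, t in totals.items()
}
# conversion_rate → {'emea': 0.04, 'amer': 0.005}
```

Whatever the grouping dimension, the division always happens last, so each group's rate is correctly weighted by its own denominator.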
## Hardcoded Filters in Measures
Measures with embedded filters reduce flexibility and create hidden assumptions:
```yaml
# Inflexible
measures:
  - name: enterprise_revenue
    expr: CASE WHEN segment = 'enterprise' THEN amount END
    agg: sum
```

This measure can only ever compute enterprise revenue. If marketing needs SMB revenue, they need a new measure. If the executive team wants startup revenue, another measure. Each one duplicates the `amount` aggregation with a different filter.
Better to define a general measure and apply filters at the metric level:
```yaml
measures:
  - name: revenue
    expr: amount
    agg: sum
```
```yaml
metrics:
  - name: enterprise_revenue
    type: simple
    type_params:
      measure: revenue
    filter:
      - "{{ Dimension('customer__segment') }} = 'enterprise'"

  - name: smb_revenue
    type: simple
    type_params:
      measure: revenue
    filter:
      - "{{ Dimension('customer__segment') }} = 'smb'"
```

Now you can create any segment-specific revenue metric without defining new measures. The base measure stays clean and reusable. The filter lives where it belongs: at the metric level, where it is visible and documented.
This also makes ratio calculations cleaner. A conversion rate filtered by segment uses the same general conversions and sessions measures with segment-specific filters, rather than requiring separate enterprise_conversions and enterprise_sessions measures.
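The reuse this pattern buys can be sketched in plain Python: one base aggregation, parameterized by a filter, instead of one hardcoded function per segment (hypothetical rows and segment values):

```python
orders = [
    {"segment": "enterprise", "amount": 500.0},
    {"segment": "smb", "amount": 120.0},
    {"segment": "enterprise", "amount": 300.0},
]

# One general "measure": sum of amount over whatever rows qualify.
def revenue(rows, segment=None):
    return sum(
        r["amount"] for r in rows
        if segment is None or r["segment"] == segment
    )

# Metric-level "filters" all reuse the same base aggregation.
total_revenue = revenue(orders)                     # 920.0
enterprise_revenue = revenue(orders, "enterprise")  # 800.0
smb_revenue = revenue(orders, "smb")                # 120.0
```

Adding a new segment means adding a call site, not duplicating the aggregation logic, which is exactly the property the metric-level filter gives you in MetricFlow.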
## Missing Descriptions
Metrics without descriptions become mysteries within weeks:
```yaml
- name: arr
  type: simple
  type_params:
    measure: arr_value
```

What does `arr` include? Is it annualized monthly recurring revenue? Does it count only active subscriptions? Are one-time fees excluded? Usage-based charges? Trial accounts?
Without a description, someone will answer these questions by reading the SQL in the underlying model, tracing the measure definition, and maybe checking the mart logic. That is 15 minutes of detective work that a single paragraph would eliminate.
Always include description and label:
```yaml
- name: arr
  label: "Annual Recurring Revenue"
  description: >
    Sum of annualized contract values for active subscriptions,
    excluding one-time fees and usage-based charges. Matches the
    finance team's ARR definition as of Q1 2026.
  type: simple
  type_params:
    measure: arr_value
```

Good descriptions answer three questions:
- What does it measure? “Sum of annualized contract values” — not just “ARR.”
- What does it exclude? “Excluding one-time fees and usage-based charges.”
- Whose definition does it match? “Matches the finance team’s ARR definition” — this is the social contract that prevents metric drift.
Descriptions are not documentation overhead. They are the mechanism by which metrics-as-code delivers trust. A metric definition without a description is a calculation without context — technically correct but practically useless for governance.
## Overusing Derived Metrics for Ratios
A subtler variant of the sum-of-ratios problem. Teams sometimes use derived metrics for calculations that should be ratio metrics:
```yaml
# Dangerous: behaves incorrectly when grouped by dimensions
- name: conversion_rate
  type: derived
  type_params:
    expr: conversions / sessions
    metrics:
      - name: conversions
      - name: sessions
```

This looks correct and produces the right number at the total level. But when grouped by a dimension (region, device type), MetricFlow may evaluate the expression at the individual metric level rather than aggregating numerator and denominator first. The result depends on implementation details and can produce the sum-of-ratios error.
Use type: ratio for any “X divided by Y” metric. It exists specifically to handle the aggregation order correctly.
Each anti-pattern shares a root cause: metrics treated as one-off computations rather than reusable artifacts. The metrics-as-code approach treats metrics as consumed by multiple tools, grouped by arbitrary dimensions, and maintained by people who did not write them — a framing that prevents the accumulation of one-off models, pre-computed ratios, hardcoded filters, and undocumented definitions.