MetricFlow is the SQL generation engine behind the dbt semantic layer. In dbt Core, it’s a separate Python package you install alongside your adapter. In dbt Cloud, it’s built in.
dbt Core installation
Install the MetricFlow bundle that matches your data platform. The package name follows the pattern `dbt-metricflow[adapter]`:
```shell
# Snowflake
pip install "dbt-metricflow[dbt-snowflake]"

# BigQuery
pip install "dbt-metricflow[dbt-bigquery]"

# Databricks
pip install "dbt-metricflow[dbt-databricks]"

# Redshift
pip install "dbt-metricflow[dbt-redshift]"

# Postgres
pip install "dbt-metricflow[dbt-postgres]"
```

The brackets specify the adapter as an optional dependency. Installing `dbt-metricflow` without an adapter would leave you without a way to execute queries against your warehouse.
MetricFlow supports Snowflake, BigQuery, Databricks, Redshift, Postgres (Core only), and Trino. If your adapter isn’t in this list, MetricFlow cannot generate SQL for it — you’d need dbt Cloud, which has its own platform support matrix.
Pin the version in your requirements.txt alongside your adapter version:
```
dbt-metricflow[dbt-snowflake]==1.x.x
```

The MetricFlow package bundles a compatible adapter version. If you have adapter version conflicts in your environment, installing `dbt-metricflow` first and letting it resolve the adapter version is usually cleaner than trying to pin both independently.
dbt Cloud
No installation needed. MetricFlow is part of the dbt Cloud runtime. The CLI command is `dbt sl` instead of `mf`:
```shell
# Core
mf validate-configs
mf query --metrics revenue --group-by metric_time

# Cloud
dbt sl validate
dbt sl query --metrics revenue --group-by metric_time
```

The semantic model and metric YAML syntax is identical between Core and Cloud. The difference is in what happens after you define your metrics: Cloud adds JDBC, GraphQL, and REST API access that lets BI tools query the semantic layer directly. Core stays at CLI querying only.
See dbt Core vs Cloud Decision Framework for a full comparison of what’s available in each.
Required project structure
Two things need to be in place before you can define semantic models.
First, MetricFlow needs to know where your semantic model files live. In dbt_project.yml, configure the semantic-layer block:
```yaml
semantic-layer:
  time_spine:
    standard_granularity_column: date_day
```

Second, create the time spine model. Cumulative metrics and time-series gap filling both require a continuous date table. Without it, queries involving those features will fail. Creating the time spine early avoids confusion later, when a cumulative metric would otherwise fail with an error that gives no obvious indication of why.
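One common sketch of that model uses dbt's cross-database `date_spine` macro. The file name `metricflow_time_spine.sql` matches the setup steps below; the 2000–2030 range is an arbitrary placeholder, so widen or narrow it to cover the dates your data actually spans:

```sql
-- models/metricflow_time_spine.sql
-- Materialize as a table: the spine is queried often and rarely changes.
{{ config(materialized='table') }}

with days as (
    -- Generate one row per day between the two (placeholder) bounds.
    {{ dbt.date_spine(
        'day',
        "cast('2000-01-01' as date)",
        "cast('2030-01-01' as date)"
    ) }}
)

select
    cast(date_day as date) as date_day
from days
```

The column name `date_day` matches the `standard_granularity_column` configured above.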
The minimal viable setup is:
- Install the package
- Add the `semantic-layer` config to `dbt_project.yml`
- Create `metricflow_time_spine.sql`
- Run `dbt build -s metricflow_time_spine`
After that, `mf validate-configs` should pass (assuming your semantic model YAMLs parse correctly), and `mf query` should work.
Verifying the installation
Run validation immediately after setup:
```shell
mf validate-configs
```

If this returns an error about missing artifacts or manifests, run `dbt parse` first:

```shell
dbt parse && mf validate-configs
```

The `dbt parse` command generates `target/semantic_manifest.json`, which MetricFlow needs to understand your project's semantic models and metrics. You'll run `dbt parse` any time you change YAML definitions and want MetricFlow to pick up those changes.
A successful validation output looks something like:
```
Validating semantic model orders...
Validating metric revenue...
Validating metric orders...
All validations passed.
```

With no semantic models or metrics defined yet, validation will return nothing to validate; that's fine. The absence of errors means the installation is working. Add your first semantic model and metric, re-run validation, and you'll see it validating the new definitions.
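When you get to that point, a minimal first definition might look like the following. This is a sketch assuming a hypothetical `orders` model with an `order_id` key, an `ordered_at` timestamp, and a numeric `amount` column; swap in your own model and column names:

```yaml
# models/marts/orders_semantic.yml  (hypothetical file name)
semantic_models:
  - name: orders
    model: ref('orders')            # assumes an existing orders model
    defaults:
      agg_time_dimension: ordered_at
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: revenue
        agg: sum
        expr: amount                # assumes a numeric amount column

metrics:
  - name: revenue
    label: Revenue
    type: simple
    type_params:
      measure: revenue
```

After adding a file like this, run `dbt parse && mf validate-configs` again so MetricFlow picks up and checks the new definitions.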