MetricFlow installation and setup

Installing MetricFlow for dbt Core with adapter-specific packages, the dbt Cloud alternative, and the initial project configuration steps needed before defining semantic models.

MetricFlow is the SQL generation engine behind the dbt semantic layer. In dbt Core, it’s a separate Python package you install alongside your adapter. In dbt Cloud, it’s built in.

dbt Core installation

Install the MetricFlow bundle that matches your data platform. The package name follows the pattern dbt-metricflow[adapter]:

# Snowflake
pip install "dbt-metricflow[dbt-snowflake]"
# BigQuery
pip install "dbt-metricflow[dbt-bigquery]"
# Databricks
pip install "dbt-metricflow[dbt-databricks]"
# Redshift
pip install "dbt-metricflow[dbt-redshift]"
# Postgres
pip install "dbt-metricflow[dbt-postgres]"

The brackets specify the adapter as an optional dependency. Installing dbt-metricflow without an adapter would leave you without a way to execute queries against your warehouse.

MetricFlow supports Snowflake, BigQuery, Databricks, Redshift, Postgres (Core only), and Trino. If your adapter isn’t in this list, MetricFlow cannot generate SQL for it — you’d need dbt Cloud, which has its own platform support matrix.

Pin the version in your requirements.txt alongside your adapter version:

dbt-metricflow[dbt-snowflake]==1.x.x

The MetricFlow package bundles a compatible adapter version. If you have adapter version conflicts in your environment, installing dbt-metricflow first and letting it resolve the adapter version is usually cleaner than trying to pin both independently.
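To see what actually got resolved, pip freeze shows the full pinned set. A quick check, assuming the Snowflake bundle from the examples above:

pip freeze | grep -iE "dbt|metricflow"

The output should include dbt-core, dbt-metricflow, and your adapter at mutually compatible versions.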

dbt Cloud

No installation needed. MetricFlow is part of the dbt Cloud runtime. The CLI command is dbt sl instead of mf:

# Core
mf validate-configs
mf query --metrics revenue --group-by metric_time
# Cloud
dbt sl validate
dbt sl query --metrics revenue --group-by metric_time

The semantic model and metric YAML syntax is identical between Core and Cloud. The difference is in what happens after you define your metrics — Cloud adds JDBC, GraphQL, and REST API access that lets BI tools query the semantic layer directly. Core stays at CLI querying only.

See dbt Core vs Cloud Decision Framework for a full comparison of what’s available in each.

Required project structure

Two things need to be in place before you can define semantic models.

First, dbt needs to know which model serves as the time spine and which column carries its standard granularity. In dbt 1.9 and later, this is configured in the time spine model's properties YAML:

models:
  - name: metricflow_time_spine
    time_spine:
      standard_granularity_column: date_day

Second, create the time spine model itself. Cumulative metrics and time-series gap filling both require a continuous date table, and queries that use those features fail without one. Creating it early avoids confusion later, when a cumulative metric errors out with no obvious pointer back to the missing table.
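A minimal version of that model, following the common pattern from the dbt docs and using dbt's cross-database date_spine macro (Snowflake-style date literals assumed; adjust the functions and the arbitrary 2000 to 2030 range for your platform):

-- models/metricflow_time_spine.sql
{{ config(materialized='table') }}

with days as (
    -- one row per day; the end date is exclusive
    {{ dbt.date_spine(
        'day',
        "to_date('2000-01-01', 'yyyy-mm-dd')",
        "to_date('2030-01-01', 'yyyy-mm-dd')"
    ) }}
)

select cast(date_day as date) as date_day
from days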

The minimal viable setup, shown end-to-end in the sketch after this list, is:

  1. Install the package
  2. Add the time_spine config to the model's properties YAML
  3. Create metricflow_time_spine.sql
  4. Run dbt build -s metricflow_time_spine
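As a single pass, assuming the Snowflake adapter and the file names used above:

# 1. install the bundle
pip install "dbt-metricflow[dbt-snowflake]"

# 2 and 3. add the time_spine config to the properties YAML and create
#          models/metricflow_time_spine.sql (see the examples above)

# 4. materialize the time spine
dbt build -s metricflow_time_spine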

After that, mf validate-configs should pass (assuming your semantic model YAMLs parse correctly), and mf query should work.

Verifying the installation

Run validation immediately after setup:

mf validate-configs

If this returns an error about missing artifacts or manifests, run dbt parse first:

dbt parse && mf validate-configs

The dbt parse command generates target/semantic_manifest.json, which MetricFlow needs to understand your project’s semantic models and metrics. You’ll run dbt parse any time you change YAML definitions and want MetricFlow to pick up those changes.
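To confirm the manifest landed and contains your definitions, you can peek at it with jq. This assumes jq is installed, and the top-level key names reflect the current semantic manifest schema, which may change between versions:

dbt parse
jq '{semantic_models: (.semantic_models | length), metrics: (.metrics | length)}' target/semantic_manifest.json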

A successful validation output looks something like:

Validating semantic model orders...
Validating metric revenue...
Validating metric orders...
All validations passed.

With no semantic models or metrics defined yet, validation will return nothing to validate — that’s fine. The absence of errors means the installation is working. Add your first semantic model and metric, re-run validation, and you’ll see it validating the new definitions.
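As a starting point, a minimal pair along the lines of the validation output above might look like this. It's a sketch assuming an orders model with order_id, ordered_at, and order_total columns; all names are illustrative:

semantic_models:
  - name: orders
    model: ref('orders')
    defaults:
      agg_time_dimension: ordered_at
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: revenue
        agg: sum
        expr: order_total

metrics:
  - name: revenue
    label: Revenue
    type: simple
    type_params:
      measure: revenue

Re-run dbt parse && mf validate-configs and those definitions should show up in the output.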