Note

dbt MCP Server Tool Reference

Complete reference for the 20+ tools exposed by the dbt MCP server — CLI commands, metadata discovery, Semantic Layer queries, and job management.

Planted
mcp · dbt · ai · data engineering

The dbt MCP server exposes tools across four categories. Which tools are available depends on your deployment mode (local vs remote) and which credentials you’ve configured. This reference covers all of them, with notes on when each is useful in practice.

dbt CLI Commands (Local Server Only)

These tools wrap dbt CLI commands. They execute against your local dbt installation and project, which means they have the same capabilities — and the same risks — as running dbt from a terminal. See dbt MCP Server Safety Considerations for the production implications.

| Tool | What It Does | When to Use It |
| --- | --- | --- |
| `build` | Executes models, tests, snapshots, and seeds in dependency order | The all-in-one command. Use when you want Claude to run everything for a selection. |
| `compile` | Generates executable SQL from models without running them | Inspecting what dbt will actually send to the warehouse. Essential for debugging Jinja logic. |
| `docs` | Generates documentation for the project | Producing or refreshing dbt docs artifacts. |
| `ls` | Lists resources in the project | Discovery. “What models exist in the staging layer?” |
| `run` | Executes models to materialize them in the database | Materializing specific models. More targeted than `build` because it skips tests. |
| `test` | Runs tests to validate data and model integrity | Running tests after changes, or checking a model’s current test status. |
| `show` | Runs a query against the data warehouse | Ad-hoc exploration. “Show me 10 rows from this model.” |
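Under the hood, every one of these tool invocations travels to the server as an MCP `tools/call` request. A minimal sketch of that JSON-RPC envelope in Python (the `sql_model_name` argument name is an assumption for illustration; check your server’s tool schema for the actual parameter names):

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Construct the JSON-RPC envelope an MCP client sends for tools/call.

    The envelope shape follows the MCP specification; the argument keys
    inside `arguments` are whatever the server's tool schema defines.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Hypothetical compile request for a staging model
request = build_tool_call("compile", {"sql_model_name": "stg__shopify__orders"})
print(request)
```

Your AI client builds and sends these envelopes for you; the sketch is only to make the tool/argument split concrete.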

Example Prompts for CLI Tools

These are the kinds of questions that trigger CLI tool usage:

Compile the SQL for stg__shopify__orders and show me the result.

The AI calls compile with the model selector, then presents the generated SQL. This is invaluable for understanding how Jinja macros, ref() calls, and config blocks translate to actual warehouse SQL.

Run tests on the base models.

The AI calls test with a selector targeting your base layer. You see pass/fail results without switching to a terminal.

What errors occurred in the last failed run?

The AI may combine ls to find recent models with show to query metadata, depending on your setup.

Metadata Discovery Tools

Available on both local and remote servers. These read your dbt project’s manifest and catalog to provide structural information about your models.

| Tool | What It Does | When to Use It |
| --- | --- | --- |
| `get_mart_models` | Gets all mart (presentation layer) models | Finding what your consumers see. The “What reports can we build?” question. |
| `get_all_models` | Gets all models in the project | Full inventory. Useful for auditing or understanding project scope. |
| `get_model_details` | Gets SQL, description, and columns for a specific model | Deep inspection. “What columns does this model have? What’s the description?” |
| `get_model_parents` | Gets upstream dependencies for a model | Tracing where a model’s data comes from. |
| `get_model_children` | Gets downstream dependencies for a model | Understanding impact. “What breaks if I change this model?” |
| `get_lineage` | Gets complete lineage with configurable depth | Full dependency visualization. Configurable depth means you can see just immediate parents/children or the entire chain. |
| `get_all_sources` | Gets source tables with freshness information | Checking what external data is flowing in and whether it’s current. |
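Conceptually, these tools walk the dependency graph that dbt records in `manifest.json`. A minimal sketch of that walk, with a hand-built child map standing in for the real (much richer) manifest structure:

```python
from collections import deque

# Simplified stand-in for manifest.json's child_map:
# model -> direct downstream models. Model names are illustrative.
child_map = {
    "base__shopify__orders": ["stg__shopify__orders"],
    "stg__shopify__orders": ["int__orders_enriched"],
    "int__orders_enriched": ["mrt__sales__orders"],
    "mrt__sales__orders": [],
}

def downstream(model, depth=None):
    """Breadth-first walk of children, optionally capped at `depth`
    levels -- mirroring get_lineage's configurable depth."""
    seen, queue = [], deque([(model, 0)])
    while queue:
        node, level = queue.popleft()
        if depth is not None and level >= depth:
            continue
        for child in child_map.get(node, []):
            if child not in seen:
                seen.append(child)
                queue.append((child, level + 1))
    return seen

print(downstream("base__shopify__orders"))           # entire downstream chain
print(downstream("base__shopify__orders", depth=1))  # immediate children only
```

The depth cap is why `get_lineage` can answer both “what’s directly downstream?” and “what’s the full blast radius?” with one tool.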

Example Prompts for Metadata Tools

What models do we have in the marts layer?

The AI calls get_mart_models and returns a list with descriptions. This is the data discovery use case — understanding what’s available without digging through files.

Show me the lineage for mrt__sales__orders.

The AI calls get_lineage with a depth parameter and presents the upstream/downstream dependency chain. This connects to layer conventions — you can see how data flows from sources through staging and intermediate models to marts.

What models depend on base__shopify__orders?

The AI calls get_model_children and shows downstream dependencies. Essential for impact analysis before changing a widely-used model.

Semantic Layer Tools (Requires dbt Cloud)

These tools query dbt’s MetricFlow Semantic Layer. They require dbt Cloud credentials (DBT_HOST, DBT_TOKEN, DBT_PROD_ENV_ID) regardless of whether you’re using the local or remote server.

| Tool | What It Does | When to Use It |
| --- | --- | --- |
| `list_metrics` | Retrieves all defined metrics | Discovering what metrics have been defined; see [[Metrics as Code]]. |
| `get_dimensions` | Gets dimensions for specified metrics | Understanding what dimensions you can group by for a given metric. |
| `query_metrics` | Queries metrics with grouping, ordering, and filtering | Actually pulling metric values. The most-used Semantic Layer tool. |

Example Prompts for Semantic Layer Tools

Query monthly revenue by region for last quarter.

The AI calls query_metrics with the revenue metric, a time grain of month, a region dimension, and a time filter. The Semantic Layer handles the SQL generation using the governed metric definition — no risk of the AI calculating revenue differently than your finance team.
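To make that tool call concrete, the arguments might be shaped like the following sketch. The key names (`metrics`, `group_by`, `where`, `order_by`) follow MetricFlow’s query vocabulary but are illustrative, not authoritative, and the date bounds are placeholder values; verify both against your server’s tool schema:

```python
import json

# Hypothetical query_metrics arguments for
# "monthly revenue by region for last quarter".
arguments = {
    "metrics": ["revenue"],
    "group_by": ["metric_time__month", "region"],
    "where": "metric_time >= '2025-04-01' and metric_time < '2025-07-01'",
    "order_by": ["metric_time__month"],
}
print(json.dumps(arguments, indent=2))
```

The important point is what is absent: no SQL, no join logic, no revenue formula. Those live in the governed metric definition, which is exactly why the numbers come back consistent.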

What dimensions are available for the customer_lifetime_value metric?

The AI calls get_dimensions and returns the list of dimensions you can slice this metric by. This is useful when exploring what analysis is possible with your current metric definitions.

Admin API Tools (dbt Cloud)

These tools manage dbt Cloud jobs. They require a service token with Job Admin permissions.

| Tool | What It Does | When to Use It |
| --- | --- | --- |
| `list_jobs` | Lists all jobs in the project | Seeing what scheduled or manual jobs exist. |
| `trigger_job_run` | Starts a job execution | Kicking off a refresh. “Run the daily load.” |
| `get_job_run_details` | Gets status and details of a job run | Checking whether a run is in progress, succeeded, or failed. |
| `cancel_job_run` | Cancels a running job | Stopping a run that’s consuming resources or hitting errors. |
| `retry_job_run` | Retries a failed job run | Restarting after a transient failure. |
| `get_job_run_error` | Gets error details from a failed run | Diagnosing what went wrong in a production run. |
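Behind `trigger_job_run` sits a plain REST call to dbt Cloud’s v2 Administrative API. A sketch that builds (but does not send) that request, with placeholder account and job IDs; the endpoint path follows dbt Cloud’s documented trigger-run route:

```python
def trigger_run_request(host, account_id, job_id, token, cause):
    """Assemble the pieces of a dbt Cloud v2 'trigger job run' POST.

    Returns (url, headers, body) so the request can be inspected or
    handed to any HTTP client. IDs and token here are placeholders.
    """
    url = f"https://{host}/api/v2/accounts/{account_id}/jobs/{job_id}/run/"
    headers = {
        "Authorization": f"Token {token}",
        "Content-Type": "application/json",
    }
    body = {"cause": cause}  # dbt Cloud requires a human-readable cause
    return url, headers, body

url, headers, body = trigger_run_request(
    "cloud.getdbt.com", 12345, 67890, "service-token", "Triggered via MCP example"
)
print(url)
```

The MCP server wraps this (and the corresponding status, cancel, and retry endpoints) so the AI never handles your service token directly in conversation.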

Example Prompts for Admin API Tools

List all scheduled jobs in the project.
Trigger the daily refresh job.
What's the status of the currently running job?

These tools are most useful for operational workflows — checking production pipeline health through conversation rather than navigating the dbt Cloud UI.

Token Overhead Considerations

Twenty-plus tool definitions loaded into your AI’s context window represent a meaningful token cost. If you only need metadata discovery, consider using feature toggles (DISABLE_DBT_CLI=true, DISABLE_SEMANTIC_LAYER=true) in your configuration to reduce the number of tools loaded. Fewer tools means more context window available for your actual conversation.
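As a sketch, those toggles sit alongside the server’s other environment variables in your MCP client configuration. The shape below follows the common `mcpServers` config style; the file location, wrapper keys, and launch command (here assumed to be `uvx dbt-mcp`) depend on your client and install method:

```json
{
  "mcpServers": {
    "dbt": {
      "command": "uvx",
      "args": ["dbt-mcp"],
      "env": {
        "DISABLE_DBT_CLI": "true",
        "DISABLE_SEMANTIC_LAYER": "true"
      }
    }
  }
}
```

With those two toggles set, only the metadata discovery and Admin API tool definitions are loaded into context.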