The dbt MCP server exposes tools across four categories. Which tools are available depends on your deployment mode (local vs remote) and which credentials you’ve configured. This reference covers all of them, with notes on when each is useful in practice.
dbt CLI Commands (Local Server Only)
These tools wrap dbt CLI commands. They execute against your local dbt installation and project, which means they have the same capabilities — and the same risks — as running dbt from a terminal. See dbt MCP Server Safety Considerations for the production implications.
| Tool | What It Does | When to Use It |
|---|---|---|
| `build` | Executes models, tests, snapshots, and seeds in dependency order | The all-in-one command. Use when you want Claude to run everything for a selection. |
| `compile` | Generates executable SQL from models without running them | Inspecting what dbt will actually send to the warehouse. Essential for debugging Jinja logic. |
| `docs` | Generates documentation for the project | Producing or refreshing dbt docs artifacts. |
| `ls` | Lists resources in the project | Discovery. “What models exist in the staging layer?” |
| `run` | Executes models to materialize them in the database | Materializing specific models. More targeted than `build` because it skips tests. |
| `test` | Runs tests to validate data and model integrity | Running tests after changes, or checking a model’s current test status. |
| `show` | Runs a query against the data warehouse | Ad-hoc exploration. “Show me 10 rows from this model.” |
Example Prompts for CLI Tools
These are the kinds of questions that trigger CLI tool usage:
*“Compile the SQL for `stg__shopify__orders` and show me the result.”* The AI calls `compile` with the model selector, then presents the generated SQL. This is invaluable for understanding how Jinja macros, `ref()` calls, and config blocks translate to actual warehouse SQL.

*“Run tests on the base models.”* The AI calls `test` with a selector targeting your base layer. You see pass/fail results without switching to a terminal.

*“What errors occurred in the last failed run?”* The AI may combine `ls` to find recent models with `show` to query metadata, depending on your setup.
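Under the hood, an MCP client invokes these tools as JSON-RPC `tools/call` requests. A minimal sketch of the payload behind the compile prompt above — the `selector` argument name is an assumption for illustration; the server’s published tool schema is authoritative:

```python
import json

def tool_call(name, arguments, request_id=1):
    """Build an MCP tools/call request (JSON-RPC 2.0 envelope)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# "Compile the SQL for stg__shopify__orders" — the `selector`
# argument name is illustrative; check the server's tool schema.
req = tool_call("compile", {"selector": "stg__shopify__orders"})
print(json.dumps(req, indent=2))
```

The AI assembles a request like this for you; the value of seeing the shape is knowing what to ask for when a tool call fails or picks the wrong model.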
Metadata Discovery Tools
Available on both local and remote servers. These read your dbt project’s manifest and catalog to provide structural information about your models.
| Tool | What It Does | When to Use It |
|---|---|---|
| `get_mart_models` | Gets all mart (presentation layer) models | Finding what your consumers see. The “What reports can we build?” question. |
| `get_all_models` | Gets all models in the project | Full inventory. Useful for auditing or understanding project scope. |
| `get_model_details` | Gets SQL, description, and columns for a specific model | Deep inspection. “What columns does this model have? What’s the description?” |
| `get_model_parents` | Gets upstream dependencies for a model | Tracing where a model’s data comes from. |
| `get_model_children` | Gets downstream dependencies for a model | Understanding impact. “What breaks if I change this model?” |
| `get_lineage` | Gets complete lineage with configurable depth | Full dependency visualization. Configurable depth means you can see just immediate parents/children or the entire chain. |
| `get_all_sources` | Gets source tables with freshness information | Checking what external data is flowing in and whether it’s current. |
Example Prompts for Metadata Tools
*“What models do we have in the marts layer?”* The AI calls `get_mart_models` and returns a list with descriptions. This is the data discovery use case — understanding what’s available without digging through files.

*“Show me the lineage for `mrt__sales__orders`.”* The AI calls `get_lineage` with a depth parameter and presents the upstream/downstream dependency chain. This connects to layer conventions — you can see how data flows from sources through staging and intermediate models to marts.

*“What models depend on `base__shopify__orders`?”* The AI calls `get_model_children` and shows downstream dependencies. Essential for impact analysis before changing a widely-used model.
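To make the depth parameter of `get_lineage` concrete, here is a toy sketch of a depth-limited upstream traversal over a hand-written dependency graph. The graph and function are illustrative only — the server reads the real graph from your project’s manifest:

```python
from collections import deque

# Toy dependency graph keyed by model name (illustrative entries
# reusing the model names from the prompts above).
PARENTS = {
    "mrt__sales__orders": ["int__orders_enriched"],
    "int__orders_enriched": ["stg__shopify__orders"],
    "stg__shopify__orders": ["base__shopify__orders"],
    "base__shopify__orders": [],
}

def upstream(model, depth):
    """Collect ancestors up to `depth` hops away — the upstream half
    of what a depth-limited lineage query returns."""
    seen, frontier = set(), deque([(model, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == depth:
            continue  # stop expanding past the requested depth
        for parent in PARENTS.get(node, []):
            if parent not in seen:
                seen.add(parent)
                frontier.append((parent, hops + 1))
    return seen

print(upstream("mrt__sales__orders", 1))  # immediate parents only
print(upstream("mrt__sales__orders", 3))  # the whole upstream chain
```

Depth 1 answers “what feeds this model directly?”; a larger depth walks the chain back toward sources.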
Semantic Layer Tools (Requires dbt Cloud)
These tools query dbt’s MetricFlow Semantic Layer. They require dbt Cloud credentials (`DBT_HOST`, `DBT_TOKEN`, `DBT_PROD_ENV_ID`) regardless of whether you’re using the local or remote server.
| Tool | What It Does | When to Use It |
|---|---|---|
| `list_metrics` | Retrieves all defined metrics | Discovering what metrics are defined. The starting point for any Semantic Layer query. |
| `get_dimensions` | Gets dimensions for specified metrics | Understanding what dimensions you can group by for a given metric. |
| `query_metrics` | Queries metrics with grouping, ordering, and filtering | Actually pulling metric values. The most-used Semantic Layer tool. |
Example Prompts for Semantic Layer Tools
*“Query monthly revenue by region for last quarter.”* The AI calls `query_metrics` with the revenue metric, a time grain of month, a region dimension, and a time filter. The Semantic Layer handles the SQL generation using the governed metric definition — no risk of the AI calculating revenue differently than your finance team.

*“What dimensions are available for the `customer_lifetime_value` metric?”* The AI calls `get_dimensions` and returns the list of dimensions you can slice this metric by. This is useful when exploring what analysis is possible with your current metric definitions.
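For illustration, the revenue prompt above might translate into arguments shaped roughly like this. The field names, the `metric_time__month` grain convention, and the date range are assumptions sketched for readability, not the tool’s exact schema:

```python
# Hypothetical argument shape for a query_metrics call answering
# "monthly revenue by region for last quarter". The Semantic Layer
# compiles this into SQL from the governed metric definition.
query = {
    "metrics": ["revenue"],
    "group_by": ["metric_time__month", "region"],   # month grain + dimension
    "order_by": ["metric_time__month"],
    "where": "metric_time >= '2024-07-01' and metric_time < '2024-10-01'",
}
print(query)
```

The point of the structure: the AI never writes a `SUM(...)` itself — it only names a metric and how to slice it.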
Admin API Tools (dbt Cloud)
These tools manage dbt Cloud jobs. They require a service token with Job Admin permissions.
| Tool | What It Does | When to Use It |
|---|---|---|
| `list_jobs` | Lists all jobs in the project | Seeing what scheduled or manual jobs exist. |
| `trigger_job_run` | Starts a job execution | Kicking off a refresh. “Run the daily load.” |
| `get_job_run_details` | Gets status and details of a job run | Checking whether a run is in progress, succeeded, or failed. |
| `cancel_job_run` | Cancels a running job | Stopping a run that’s consuming resources or hitting errors. |
| `retry_job_run` | Retries a failed job run | Restarting after a transient failure. |
| `get_job_run_error` | Gets error details from a failed run | Diagnosing what went wrong in a production run. |
Example Prompts for Admin API Tools
*“List all scheduled jobs in the project.”*

*“Trigger the daily refresh job.”*

*“What’s the status of the currently running job?”*

These tools are most useful for operational workflows — checking production pipeline health through conversation rather than navigating the dbt Cloud UI.
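The status-checking workflow amounts to polling `get_job_run_details` until the run reaches a terminal state. A self-contained sketch with a stubbed tool call — the integer status codes follow dbt Cloud’s v2 API convention (10 = success, 20 = error, 30 = cancelled), but verify against the current API docs; the run id and stub are hypothetical:

```python
import itertools
import time

# Terminal run statuses, per dbt Cloud's v2 API integer encoding.
SUCCESS, ERROR, CANCELLED = 10, 20, 30

def fake_get_job_run_details(run_id, _polls=itertools.count()):
    """Stand-in for the get_job_run_details tool: reports an
    in-progress run (status 3 = running) twice, then success."""
    return {"id": run_id, "status": 3 if next(_polls) < 2 else SUCCESS}

def wait_for_run(run_id, poll_seconds=0):
    """Poll until the run is terminal — the loop the AI effectively
    performs when you ask 'is the job done yet?'."""
    while True:
        run = fake_get_job_run_details(run_id)
        if run["status"] in (SUCCESS, ERROR, CANCELLED):
            return run
        time.sleep(poll_seconds)

result = wait_for_run(12345)  # hypothetical run id
print(result["status"])  # 10 (success)
```

On an `ERROR` status, the natural next step is `get_job_run_error` for the failure details, then `retry_job_run` if the cause was transient.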
Token Overhead Considerations
Twenty-plus tool definitions loaded into your AI’s context window represent a meaningful token cost. If you only need metadata discovery, consider using feature toggles (`DISABLE_DBT_CLI=true`, `DISABLE_SEMANTIC_LAYER=true`) in your configuration to reduce the number of tools loaded. Fewer tools mean more context window available for your actual conversation.
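As a sketch, the toggles are plain environment variables on the server process; exactly where you set them depends on how you launch the server (for example, in the `env` section of your MCP client’s server configuration):

```python
import os

# Feature toggles read by the dbt MCP server at startup. Trimming
# tool categories shrinks the definitions loaded into the model's
# context window.
toggles = {
    "DISABLE_DBT_CLI": "true",         # drop the CLI command tools
    "DISABLE_SEMANTIC_LAYER": "true",  # drop the Semantic Layer tools
}
os.environ.update(toggles)
```

With both categories disabled, only the metadata discovery tools (and any Admin API tools your credentials allow) remain.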