The dbt MCP server comes in two deployment modes — local and remote — with fundamentally different capabilities.
## Local Server
The local server runs on your machine. It spawns a dbt process alongside your existing installation and communicates through stdio transport. You need three things: the uv package manager, a dbt installation (Core, Fusion Engine, or Cloud CLI all work), and a project directory containing dbt_project.yml.
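As a rough sketch, the local server’s entry in an MCP client configuration (Claude Desktop and similar clients use this JSON shape) might look like the following. The `uvx dbt-mcp` invocation and the `DBT_PROJECT_DIR`/`DBT_PATH` variable names follow dbt Labs’ setup instructions, but verify them against the current dbt-mcp README before relying on them:

```json
{
  "mcpServers": {
    "dbt": {
      "command": "uvx",
      "args": ["dbt-mcp"],
      "env": {
        "DBT_PROJECT_DIR": "/path/to/your/dbt/project",
        "DBT_PATH": "/usr/local/bin/dbt"
      }
    }
  }
}
```

The client spawns the server as a subprocess and talks to it over stdio; nothing is hosted and no ports are opened.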
What you get:
- Full CLI access. `run`, `build`, `test`, `compile`, `docs`, `ls`, `show` — every dbt command you’d run in a terminal, accessible through conversation.
- Metadata discovery. Browse models, inspect columns, trace lineage upstream and downstream. No cloud account needed for this.
- Semantic Layer queries (optional). If you add dbt Cloud credentials, the local server can also query metrics through MetricFlow. But this is additive — everything else works without Cloud.
- Job management (optional). Same deal: add Cloud credentials and you can list, trigger, cancel, and retry dbt Cloud jobs.
The local server is a superset. It starts with CLI + metadata, and grows with optional Cloud features.
No dbt Cloud account is required for the base functionality. This is the important detail that gets lost in dbt Labs’ documentation. If you’re running dbt Core on your laptop against a warehouse, the local server works fine. You just won’t have Semantic Layer or job management.
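If you do want them, the change is purely configuration: add dbt Cloud credentials to the same entry’s `env` block. Extending the earlier sketch, with the variable names the dbt-mcp README documents at the time of writing (treat the exact names as subject to change, given the experimental status):

```json
{
  "mcpServers": {
    "dbt": {
      "command": "uvx",
      "args": ["dbt-mcp"],
      "env": {
        "DBT_PROJECT_DIR": "/path/to/your/dbt/project",
        "DBT_PATH": "/usr/local/bin/dbt",
        "DBT_HOST": "cloud.getdbt.com",
        "DBT_TOKEN": "<your-service-token>",
        "DBT_PROD_ENV_ID": "<your-production-environment-id>"
      }
    }
  }
}
```

Everything from the base setup keeps working; the Cloud variables only unlock the additional tools.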
## Remote Server
The remote server is hosted by dbt Labs. No installation, no uv, no local dbt. You configure your MCP client to connect to dbt Cloud’s endpoint, authenticate with a service token, and you’re in.
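In configuration terms, that means a URL-based entry instead of a spawned command. A sketch with placeholder values, modeled on dbt Labs’ published remote setup (confirm the exact endpoint path and header names against the current docs, since clients also differ in how they declare HTTP servers):

```json
{
  "mcpServers": {
    "dbt-remote": {
      "url": "https://<your-dbt-cloud-host>/api/ai/v1/mcp/",
      "headers": {
        "Authorization": "token <your-service-token>",
        "x-dbt-prod-environment-id": "<your-production-environment-id>"
      }
    }
  }
}
```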
What you get:
- Read-only metadata. Model details, lineage, source freshness — the same discovery tools as the local server.
- Semantic Layer queries. Metrics, dimensions, filtering — the MetricFlow features.
What you don’t get:
- No CLI commands. No `run`, `build`, `test`, or `compile`. The remote server cannot execute dbt against your warehouse. It’s purely a metadata and metrics interface.
## The Decision
Data engineers typically want the local server, which provides CLI access — the ability to compile SQL, run tests, and build specific models through conversation. Without CLI access, the server can describe your project but not act on it.
The remote server suits analysts and stakeholders who need to query the Semantic Layer without touching the codebase — getting answers from governed metric definitions without CLI access.
| | Local Server | Remote Server |
|---|---|---|
| Installation | Requires uv, dbt, project directory | None |
| dbt Cloud required | No (optional for Semantic Layer) | Yes |
| CLI commands | Full access | None |
| Metadata discovery | Yes | Yes |
| Semantic Layer | With Cloud credentials | Yes |
| Job management | With Cloud credentials | No |
| Best for | Data engineers, analytics engineers | Analysts, stakeholders |
## Both at Once
Nothing stops you from running both. A data engineer might use the local server for development — compiling SQL, running tests, debugging models — while their organization also deploys the remote server for analysts who need Semantic Layer access. The servers don’t conflict because they serve different purposes through different transport mechanisms.
The local server uses stdio transport (spawned as a subprocess), while the remote server uses HTTP transport (connecting to dbt Cloud’s hosted endpoint). They coexist naturally as separate entries in your MCP client configuration.
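Concretely, one configuration can carry both, combining the sketches above (all values are placeholders):

```json
{
  "mcpServers": {
    "dbt-local": {
      "command": "uvx",
      "args": ["dbt-mcp"],
      "env": { "DBT_PROJECT_DIR": "/path/to/your/dbt/project" }
    },
    "dbt-remote": {
      "url": "https://<your-dbt-cloud-host>/api/ai/v1/mcp/",
      "headers": { "Authorization": "token <your-service-token>" }
    }
  }
}
```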
## The Experimental Caveat
dbt Labs marks both servers as experimental. The tool surface has changed between releases, and the #tools-dbt-mcp channel in dbt Community Slack is where breaking changes get announced. This is worth factoring into how much you build on top of the server — keep your workflows adaptable rather than deeply coupled to specific tool names or behaviors.