Adrienne Vermorel

BigQuery MCP Server: Complete Setup Guide

I wanted my AI assistant to query BigQuery directly. No copy-pasting SQL results, no context-switching between the GCP console and Claude. Google now offers two official ways to do this, both using the Model Context Protocol (MCP).

This guide covers both: the managed Remote MCP Server and the self-hosted MCP Toolbox for Databases. I tested both with Claude Desktop and Claude Code, and found important differences in setup complexity and authentication handling.

For Claude Code users: the bq CLI often works better than MCP for interactive use. Claude can run bq query commands directly with no setup. MCP adds value for custom parameterized queries, multi-database workflows, or when you need structured audit trails. This guide helps you decide which fits your use case.

Two Official Options

The Remote MCP Server runs entirely on Google’s infrastructure. You enable it via gcloud, configure your client, and start querying. No servers to manage. However, it requires manual token management with Claude Code (tokens expire hourly), making it impractical for regular use outside Claude Desktop.

The MCP Toolbox is an open-source binary you run locally or deploy yourself. It offers more tools, supports multiple databases beyond BigQuery, and lets you define custom parameterized queries. Authentication uses Application Default Credentials (ADC), which works more smoothly but requires understanding the difference between ADC and regular gcloud auth.

For Claude Desktop, the Remote Server works well for basic exploration. For Claude Code, the Toolbox is the practical choice.

Option 1: Remote MCP Server

The Remote Server exposes BigQuery through a single endpoint: https://bigquery.googleapis.com/mcp. No local installation required.

It provides five tools: list_dataset_ids, get_dataset_info, list_table_ids, get_table_info, and execute_sql. These cover the basics: the AI assistant can discover your datasets, examine table schemas, and run SQL queries.
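
Under the hood, each of these is a standard MCP tool invocation. For reference, a tools/call request for execute_sql looks roughly like this; the method and params shape come from the MCP spec, while the exact argument name (sql here) depends on the tool's own schema, so treat it as a sketch:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_sql",
    "arguments": {
      "sql": "SELECT 1"
    }
  }
}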

Setup

Enable the MCP service for your Google Cloud project:

gcloud beta services mcp enable bigquery.googleapis.com \
--project=PROJECT_ID

Replace PROJECT_ID with your actual project ID.

Required IAM Roles

The user or service account needs roles/bigquery.user (to run queries and list datasets) and roles/bigquery.dataViewer (to read table data).

For production use with a service account:

gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:SA_EMAIL" \
--role="roles/bigquery.user"
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:SA_EMAIL" \
--role="roles/bigquery.dataViewer"

Claude Desktop Configuration

For Claude Desktop, add the Remote Server to your config file at ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "bigquery-remote": {
      "url": "https://bigquery.googleapis.com/mcp",
      "env": {
        "GOOGLE_CLOUD_PROJECT": "your-project-id"
      }
    }
  }
}

Claude Desktop handles OAuth automatically with this configuration.

Claude Code Configuration

Claude Code requires manual token management for the Remote MCP Server. The configuration syntax differs from Claude Desktop. Add to your .mcp.json:

{
  "mcpServers": {
    "bigquery": {
      "type": "http",
      "url": "https://bigquery.googleapis.com/mcp",
      "headers": {
        "Authorization": "Bearer <ACCESS_TOKEN>",
        "x-goog-user-project": "your-project-id"
      }
    }
  }
}

Get your access token with:

gcloud auth print-access-token

The token expires in about an hour. You must manually refresh it and restart Claude Code when it expires. This makes the Remote MCP Server impractical for regular use with Claude Code. For Claude Code, skip to Option 2 (the Toolbox), which handles authentication more gracefully.
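
If you want to experiment with it anyway, a small script takes the sting out of the hourly refresh. A minimal sketch, assuming jq is installed, your .mcp.json sits in the project root, and the server entry is named bigquery as above:

# Write a fresh token into .mcp.json, then restart Claude Code manually
TOKEN=$(gcloud auth print-access-token)
jq --arg auth "Bearer $TOKEN" \
  '.mcpServers.bigquery.headers.Authorization = $auth' \
  .mcp.json > .mcp.json.tmp && mv .mcp.json.tmp .mcp.json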

Option 2: MCP Toolbox

The MCP Toolbox is open source (https://github.com/googleapis/genai-toolbox, Apache 2.0) and runs as a local binary. As of v0.22.0, it is still in beta.

Beyond the basic tools from the Remote Server, the Toolbox adds bigquery-forecast (time-series forecasting via BigQuery ML), bigquery-conversational-analytics, bigquery-search-catalog (Data Catalog integration), and, crucially, bigquery-sql for executing pre-defined parameterized queries. That last one is useful when you want to give the AI assistant structured access to your data without allowing arbitrary SQL.

Installing the Toolbox

Download the binary, configure authentication, and set environment variables.

macOS (Apple Silicon):

export VERSION=0.22.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/arm64/toolbox
chmod +x toolbox

macOS (Intel):

export VERSION=0.22.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/amd64/toolbox
chmod +x toolbox

Linux (AMD64):

export VERSION=0.22.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox

Via Homebrew:

brew install mcp-toolbox

Via NPX (no local install):

npx -y @toolbox-sdk/server --prebuilt bigquery --stdio

NPX is convenient for testing but adds startup latency. For production, download the binary.
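
Whichever route you choose, a quick smoke test confirms the binary runs (the ./ path assumes the manual download; Homebrew puts toolbox on your PATH):

./toolbox --version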

Authentication

The Toolbox uses Application Default Credentials (ADC), not your regular gcloud auth login credentials. This is a common source of confusion.

Regular gcloud auth login authenticates gcloud CLI commands. ADC is a separate credential used by applications and SDKs. You need both:

# This authenticates gcloud CLI commands (you probably already have this)
gcloud auth login
# This sets up ADC, which the MCP server uses
gcloud auth application-default login

The second command opens a browser for OAuth consent. Once complete, credentials are stored at ~/.config/gcloud/application_default_credentials.json.

For production, use a service account key file and set GOOGLE_APPLICATION_CREDENTIALS:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"

You also need to set the project:

export BIGQUERY_PROJECT="your-project-id"

Then run the server:

./toolbox --prebuilt bigquery --stdio

The --stdio flag runs the server in stdio mode, which MCP clients like Claude Desktop expect.
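
If you want to poke at the server outside an MCP client, omitting --stdio runs it as an HTTP server instead (127.0.0.1:5000 by default, per the Toolbox docs); the manifest endpoint below is also from those docs, so verify it against your version:

# HTTP mode: serves on 127.0.0.1:5000 unless overridden with --address/--port
./toolbox --prebuilt bigquery

# In another terminal: fetch the default toolset manifest
curl http://127.0.0.1:5000/api/toolset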

Important: After updating credentials, you must restart Claude Code or Claude Desktop. The MCP server reads credentials at startup and does not detect changes while running.

Configuration Examples

Claude Desktop with Local Binary

{
  "mcpServers": {
    "bigquery": {
      "command": "/Users/yourname/tools/toolbox",
      "args": ["--prebuilt", "bigquery", "--stdio"],
      "env": {
        "BIGQUERY_PROJECT": "your-project-id"
      }
    }
  }
}

Replace the path with wherever you put the binary.

NPX-based Configuration

{
  "mcpServers": {
    "bigquery": {
      "command": "npx",
      "args": ["-y", "@toolbox-sdk/server", "--prebuilt", "bigquery", "--stdio"],
      "env": {
        "BIGQUERY_PROJECT": "your-project-id"
      }
    }
  }
}

The -y flag auto-confirms the NPX download.

Claude Code Configuration

claude mcp add bigquery -e BIGQUERY_PROJECT=your-project-id \
  -- npx -y @toolbox-sdk/server --prebuilt bigquery --stdio

The -e flag sets the BIGQUERY_PROJECT environment variable for the server process, so there is no separate edit step.

For team projects, you can create a project-scoped configuration:

claude mcp add bigquery -s project -e BIGQUERY_PROJECT=your-project-id \
  -- npx -y @toolbox-sdk/server --prebuilt bigquery --stdio

This creates a .mcp.json file in your project root that you can commit.
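
The generated file should look something like this, matching the NPX configuration shown earlier (exact formatting may differ by Claude Code version):

{
  "mcpServers": {
    "bigquery": {
      "command": "npx",
      "args": ["-y", "@toolbox-sdk/server", "--prebuilt", "bigquery", "--stdio"],
      "env": {
        "BIGQUERY_PROJECT": "your-project-id"
      }
    }
  }
}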

Custom Tools

The Toolbox supports custom tool definitions via a tools.yaml file. For data teams, this is the most useful feature: you can define parameterized queries that constrain what the AI can run while still allowing flexibility in the parameters.

sources:
  my-bigquery-source:
    kind: bigquery
    project: your-project-id

tools:
  search_orders_by_date:
    kind: bigquery-sql
    source: my-bigquery-source
    description: Search orders within a date range
    statement: |
      SELECT
        order__id,
        order__customer_id,
        order__total_amount_usd,
        order__created_at
      FROM `your-project.ecommerce.mrt__sales__orders`
      WHERE order__created_at BETWEEN @start_date AND @end_date
      ORDER BY order__created_at DESC
      LIMIT 100
    parameters:
      - name: start_date
        type: string
        description: Start date (YYYY-MM-DD)
      - name: end_date
        type: string
        description: End date (YYYY-MM-DD)

  get_customer_metrics:
    kind: bigquery-sql
    source: my-bigquery-source
    description: Get aggregated metrics for a customer
    statement: |
      SELECT
        customer__id,
        customer__orders,
        customer__lifetime_value_usd,
        customer__last_ordered_at
      FROM `your-project.ecommerce.mrt__sales__customers`
      WHERE customer__id = @customer_id
    parameters:
      - name: customer_id
        type: string
        description: The customer ID to look up

Run with your custom config:

./toolbox --tools tools.yaml --stdio

This approach controls exactly which queries the AI can run and which parameters it can vary. Much safer than arbitrary SQL execution.

Service Account Setup

For production, create a dedicated service account with minimal permissions:

# Create service account
gcloud iam service-accounts create bigquery-mcp \
--display-name="BigQuery MCP Service Account"
# Grant roles
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:bigquery-mcp@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/bigquery.user"
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:bigquery-mcp@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/bigquery.dataViewer"
# Create key file
gcloud iam service-accounts keys create bigquery-mcp-key.json \
--iam-account=bigquery-mcp@PROJECT_ID.iam.gserviceaccount.com

Then set GOOGLE_APPLICATION_CREDENTIALS to point to the key file. The key should be rotated periodically.
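
Rotation itself is two standard commands; the KEY_ID comes from the list output:

# See which keys exist and how old they are
gcloud iam service-accounts keys list \
  --iam-account=bigquery-mcp@PROJECT_ID.iam.gserviceaccount.com

# After creating and deploying a replacement key, delete the old one
gcloud iam service-accounts keys delete KEY_ID \
  --iam-account=bigquery-mcp@PROJECT_ID.iam.gserviceaccount.com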

For user-facing applications where queries should run with the end user’s permissions, you can configure OAuth 2.0 via Google’s Agent Development Kit. That is more complex to set up but gives you per-user audit trails.

What You Can Do With It

Once configured, interacting with BigQuery happens through natural language.

Asking “list all datasets in my project” triggers list_dataset_ids. “Show me the schema of the customers table in the sales dataset” calls get_table_info. “What are the top 10 orders by revenue this month?” prompts the assistant to write and execute SQL.

I find it most useful for quick data quality checks. “Count null values in the email column” or “find duplicate customer_id values” or “are there any orders with negative amounts?” are the kinds of questions where writing the SQL is not hard, but the context-switching adds up. Having the assistant handle it while I stay in my editor is noticeably faster.
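
For instance, "find duplicate customer_id values" typically comes back as SQL along these lines (table and column names borrowed from the tools.yaml example above):

SELECT
  customer__id,
  COUNT(*) AS occurrences
FROM `your-project.ecommerce.mrt__sales__customers`
GROUP BY customer__id
HAVING COUNT(*) > 1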

Ad-hoc reporting works too: “monthly sales summary for 2025 by product category” or “compare Q4 2025 revenue to Q4 2024 by region”. The assistant writes the SQL, runs it, and formats the results.

Troubleshooting

Silent failures on list tools

When permissions are missing, list_dataset_ids and similar tools may return empty results with no error message. The execute_sql tool returns a clear 403 error, but list operations fail silently.

If you get empty results when you expect data:

  1. Verify ADC is set up: gcloud auth application-default print-access-token should return a token
  2. Check the project: echo $BIGQUERY_PROJECT should show your project ID
  3. Test with execute_sql instead, which gives clearer errors

Authentication errors after gcloud auth login

If you ran gcloud auth login but still get permission errors, you probably need ADC:

gcloud auth application-default login

Then restart Claude Code or Claude Desktop.

Credentials not picked up after changes

The MCP server reads credentials at startup. If you update credentials while Claude is running, restart the application. With Claude Code, you may need to restart twice (once to pick up the new config, once for the new credentials).

Costs and Gotchas

Both options are still in preview/beta, so expect breaking changes. Pin to a specific version in production.

The bigger concern is query costs. Every execute_sql call runs a BigQuery query, and costs depend on data scanned. A curious AI asking broad questions about large tables can rack up charges. Mitigate with custom parameterized queries that include LIMIT clauses, restrict access to specific datasets, and set up BigQuery cost controls.
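
A cheap guardrail while you tune this: BigQuery's dry-run mode reports how much data a query would scan without running it or billing anything (table name borrowed from the earlier example):

bq query --use_legacy_sql=false --dry_run \
  'SELECT * FROM `your-project.ecommerce.mrt__sales__orders`'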

On permissions: execute_sql can technically run INSERT, UPDATE, DELETE, and DDL statements. For read-only access, pair roles/bigquery.user (needed to run query jobs) with roles/bigquery.dataViewer and avoid roles/bigquery.dataEditor; neither of the first two grants write access. Custom bigquery-sql tools with read-only queries add a further layer of restriction.

Other things to watch: long-running queries may time out, large result sets get truncated, and you can hit BigQuery's concurrent query limits.

When to Use MCP vs CLI

Before committing to MCP setup, consider whether the bq CLI might be simpler. Claude Code can run bq query, bq ls, and bq show commands directly with no configuration. The CLI uses your regular gcloud credentials, requires no server restarts after auth changes, and gives clearer error messages.
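
For comparison, these are the kinds of commands Claude Code can already run with no MCP setup at all (table name borrowed from the earlier example):

# List datasets, inspect a schema, run a query, all on regular gcloud credentials
bq ls
bq show --schema your-project:ecommerce.mrt__sales__orders
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) AS orders FROM `your-project.ecommerce.mrt__sales__orders`'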

MCP adds value in specific scenarios:

Custom parameterized queries: The Toolbox’s tools.yaml lets you define exactly which queries the AI can run with which parameters. This is safer than arbitrary SQL and useful when building AI workflows that should only access specific data patterns.

Multi-database workflows: If you work across BigQuery, Cloud SQL, Spanner, or other databases, the Toolbox provides a unified interface rather than switching between CLIs.

Structured tool outputs: MCP returns structured data that Claude can parse more reliably than CLI output. For automated pipelines where you need predictable response formats, MCP reduces parsing errors.

Audit requirements: Every MCP tool call creates a structured log entry. If compliance requires audit trails of AI data access, MCP provides this by default.

For interactive exploration, the CLI is often faster to set up and use. For automated workflows or constrained access patterns, MCP is worth the configuration overhead.

Which MCP Option

If you decide MCP fits your use case:

For Claude Desktop: Start with the Remote Server. It works with minimal configuration and covers most exploration needs.

For Claude Code: Use the Toolbox. The Remote Server’s token expiration makes it impractical for regular use.

For teams: The Toolbox with a shared tools.yaml lets you encode common queries and business logic. This gives AI assistants structured access to your data without arbitrary SQL risks.