Adrienne Vermorel

Connecting Claude Code to Your Data Warehouse (And Why You Might Not Need MCP)

It turns out we’ve all been using MCP wrong.

That’s not my claim. It’s Cloudflare’s, from their September 2025 research into AI agent efficiency. Their finding: when they asked LLMs to write code that calls APIs instead of generating MCP tool calls directly, token usage dropped by 81%. Anthropic independently validated this a few weeks later, reporting a 98.7% reduction in a Google Drive-to-Salesforce workflow.

For those of us working with BigQuery and GCP every day, this research has important implications. MCP (Model Context Protocol) has been the hot new standard for connecting AI agents to external tools. Google launched managed MCP servers for BigQuery in December 2025. The ecosystem is expanding rapidly. So why are two major players suggesting we might be better off writing shell commands?

The answer lies in training data, and it suggests that the bq and gcloud commands you’ve used for years aren’t legacy tools to be abstracted away. They might be Claude Code’s most powerful interface to your data warehouse.


Why LLMs Speak Bash Better Than JSON Tool Schemas

The Training Data Asymmetry

Large language models learn from what they’ve seen. Claude, ChatGPT, and their peers have ingested millions of GitHub repositories, Stack Overflow answers, blog posts, and documentation. The bq command-line tool has existed since 2011. gcloud launched in 2013. That’s over a decade of real-world usage examples: troubleshooting threads, tutorial code, production scripts, CI/CD configurations.

MCP tool-calling, by contrast, requires models to generate structured JSON matching specific schemas (formats they’ve encountered primarily in synthetic training data created specifically to teach tool use). Cloudflare’s engineering team put it bluntly:

Making an LLM perform tasks with tool calling is like putting Shakespeare through a month-long class in Mandarin and then asking him to write a play in it. It’s just not going to be his best work.

The asymmetry is stark. When Claude generates a bq query command, it’s drawing on patterns it’s seen thousands of times. When it generates an MCP tool call, it’s working from a much smaller, more artificial corpus.

The Benchmark Evidence

Cloudflare’s “Code Mode” experiment tested this hypothesis directly. They asked agents to create 31 calendar events: once using traditional MCP tool-calling, once by writing TypeScript code against the same APIs.

The Code Mode approach used 81% fewer tokens. But the more revealing result was qualitative: the code-writing agent correctly called new Date() to determine the current date before creating events. The tool-calling agent, lacking that capability, created all 31 events a year in the past.

Anthropic’s research pushed further. Their test case (downloading a document from Google Drive and attaching it to a Salesforce record) required 150,000 tokens with traditional MCP. With code execution, it dropped to 2,000 tokens. That’s a 98.7% reduction.

The key insight explains why: with traditional MCP, every intermediate result passes through the model’s context window. Download a file? The model sees the full response. Parse metadata? Back through the context. Attach to Salesforce? Another round trip. With code execution, data stays in a sandbox. The model only receives the final result it actually needs.

For BigQuery workflows, this principle extends naturally to CLI commands. Claude doesn’t need to parse JSON tool schemas to understand bq query --nouse_legacy_sql 'SELECT * FROM dataset.table LIMIT 10'. It’s seen that pattern countless times.
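
The same "keep intermediate data out of the context" idea carries over to shell workflows. Below is a minimal sketch, assuming hypothetical raw_data and staging datasets and a my-bucket GCS bucket: intermediate results are materialized or exported server-side, so only a job status or a small sample ever reaches the model.

# Materialize an intermediate result without pulling rows into the conversation
bq query \
  --destination_table=staging.large_intermediate \
  --replace \
  --nouse_legacy_sql \
  'SELECT * FROM raw_data.events WHERE event_date >= "2025-01-01"'

# Ship the full result set to GCS; only the job status reaches Claude
bq extract \
  --destination_format=NEWLINE_DELIMITED_JSON \
  staging.large_intermediate \
  'gs://my-bucket/tmp/events_*.json'

# Let Claude inspect a handful of rows instead of the whole payload
bq query --nouse_legacy_sql --max_rows=5 \
  'SELECT * FROM staging.large_intermediate'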


The MCP Token Tax

Before diving into BigQuery specifics, let’s quantify what MCP actually costs in context window space.

The Upfront Burden

Every MCP server you connect loads its tool definitions into Claude’s context before your conversation begins. Anthropic’s internal testing found that 58 tools across 5 MCP servers consumed approximately 55,000 tokens, before a single user message.

In extreme configurations, tool definitions alone ate 134,000 tokens. That’s 67% of Claude’s 200K context window gone before you ask your first question.

Real-world Claude Code measurements tell a similar story. One user documented their setup with 7 MCP servers:

| Component | Tokens | % of Context |
| --- | --- | --- |
| System prompt | 2,700 | 1.4% |
| Built-in tools | 14,400 | 7.2% |
| MCP tools | 67,300 | 33.7% |
| Total before conversation | 84,400 | 42.2% |

Even after aggressive optimization (cutting to 3 essential servers), they still consumed 42,600 tokens (21.3% of context). For data engineering workflows requiring BigQuery queries, GCS operations, dbt commands, and pipeline management, the overhead compounds quickly.

CLI Efficiency by Comparison

A moderately complex bq query command consumes 15-30 tokens. The JSON schema definition for a single MCP tool parameter often exceeds that. When you’re running dozens of operations in a session (exploring schemas, profiling data, iterating on transformations), the difference is orders of magnitude, not percentages.
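
To make that concrete, here is a rough, illustrative comparison of the same row-count check in both forms. The MCP request shape below is a generic example for illustration only, not any particular server's exact wire format:

# CLI form: roughly 20 tokens of command text, no upfront schema
bq query --nouse_legacy_sql 'SELECT COUNT(*) FROM ecommerce.orders'

# Equivalent MCP tool call (illustrative shape only) -- the request JSON,
# plus the tool schema loaded at session start, costs considerably more:
#   {"name": "execute_sql",
#    "arguments": {"sql": "SELECT COUNT(*) FROM ecommerce.orders"}}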

This isn’t an argument against MCP’s existence. It’s an argument for understanding its costs and choosing the right tool for each job.


The BigQuery MCP Landscape

To make an informed decision, you need to know what’s actually available. Google has invested significantly in MCP infrastructure for BigQuery, and the ecosystem has matured rapidly.

Official Google Options

BigQuery Remote MCP Server (Managed, Preview as of January 2026)

Google’s fully hosted option requires no local installation. Authentication uses OAuth 2.0 combined with IAM permissions, and the server runs entirely on Google’s infrastructure. Setup is minimal:

Terminal window
gcloud beta services mcp enable bigquery.googleapis.com

The managed server exposes tools for querying, schema exploration, and an ask_data_insights capability for conversational analytics.

MCP Toolbox for Databases (Local, Open Source)

For teams preferring local control, Google’s toolbox provides a single Go binary supporting BigQuery alongside 30+ other databases (AlloyDB, Cloud SQL, Spanner, PostgreSQL, MySQL). Version 0.24.0 ships 10 pre-built BigQuery tools, including:

  • execute_sql: Run queries with parameterized inputs
  • list_dataset_ids: Enumerate datasets in a project
  • list_table_ids: List tables within a dataset
  • get_table_info: Retrieve schema and metadata
  • ask_data_insights: Natural language data exploration

Configuration lives in a tools.yaml file:

sources:
  my-bigquery-source:
    kind: bigquery
    project: your-project-id
    location: EU

Token consumption for the full BigQuery toolset runs approximately 2,000-5,000 tokens (substantial but manageable).
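
Wiring this into Claude Code looks roughly like the sketch below. The server name, port, and endpoint path here are assumptions; check the toolbox release notes and the Claude Code MCP docs for the exact values in your versions.

# Start the toolbox with the config above (serves MCP locally)
toolbox --tools-file tools.yaml

# Register it with Claude Code under a hypothetical name;
# verify the transport and URL against your toolbox version's docs
claude mcp add --transport sse bigquery-toolbox http://127.0.0.1:5000/mcp/sse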

Community Alternatives

Several community servers optimize for specific use cases:

ergut/mcp-bigquery-server provides read-only access with a configurable query limit (default 1 GB), a useful guardrail for cost control in exploratory workflows.

pvoo’s bigquery-mcp claims 70% fewer tokens in “basic mode” by returning only names rather than full metadata during schema exploration.

caron14’s server uniquely includes --dry_run support for cost estimation before query execution.
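
For comparison, cost estimation has been a stock bq feature for years. A quick sketch, using a hypothetical ecommerce.orders table:

# Estimate bytes scanned without running the query
bq query --dry_run --nouse_legacy_sql '
SELECT COUNT(DISTINCT customer_id)
FROM ecommerce.orders
'
# Output looks roughly like:
# "Query successfully validated. Assuming the tables are not modified,
#  running this query will process 10485760 bytes of data."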

The local and community options all use Application Default Credentials, so authentication works just like the CLI tools: if gcloud auth application-default login works, so does MCP.


What bq CLI Does That MCP Cannot

Here’s where the comparison gets concrete. Current BigQuery MCP implementations focus almost exclusively on querying and schema exploration. Many data engineering staples simply don’t exist:

| Capability | bq CLI | MCP Servers |
| --- | --- | --- |
| Run queries | ✅ Full SQL support | execute_sql |
| List datasets/tables | bq ls | list_dataset_ids, list_table_ids |
| Show schema | bq show --schema | get_table_info |
| Load data from GCS or local files | bq load | ❌ Not available |
| Export to GCS | bq extract | ❌ Not available |
| Copy tables | bq cp | ❌ Not available |
| Delete tables/datasets | bq rm | ❌ Not available |
| Cancel running jobs | bq cancel | ⚠️ Limited |
| Dry run (cost estimation) | --dry_run flag | ⚠️ Only caron14’s server |
| Partitioning/clustering | ✅ Full control | ❌ Not available |
| Authorized views | bq update --view | ❌ Not available |
| Reservation management | bq mk --reservation | ❌ Not available |

For production data engineering (where you regularly load files, export results, copy tables between datasets, and manage jobs), CLI provides complete coverage. MCP offers a subset focused on analytics.
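
For example, promoting a table between datasets and cleaning it up afterwards, two operations with no MCP equivalent today, is one command each. Dataset and table names below are hypothetical:

# Copy (promote) a table between datasets; -f overwrites without prompting
bq cp -f staging.sales_summary analytics.sales_summary

# Remove the staging copy; -f skips confirmation, -t targets a table
bq rm -f -t staging.sales_summary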

A Real ETL Workflow

Consider a typical pipeline: ingest CSV files from Cloud Storage, transform them with SQL, export results for downstream consumption.

Terminal window
# Load raw data from GCS
bq load \
  --source_format=CSV \
  --autodetect \
  --replace \
  raw_data.daily_sales \
  'gs://my-bucket/sales/2025-01-*.csv'

# Transform and materialize
bq query \
  --destination_table=analytics.sales_summary \
  --replace \
  --nouse_legacy_sql \
  '
  SELECT
    DATE_TRUNC(sale_date, MONTH) as month,
    region,
    SUM(amount) as total_sales,
    COUNT(DISTINCT customer_id) as unique_customers
  FROM raw_data.daily_sales
  GROUP BY 1, 2
  '

# Export for external consumption
bq extract \
  --destination_format=CSV \
  --compression=GZIP \
  analytics.sales_summary \
  'gs://my-bucket/exports/sales_summary_*.csv.gz'

The MCP toolbox handles the middle step beautifully. Everything else (the load, the export) requires CLI. If you’re building pipelines with Claude Code, you need bq regardless of whether you also use MCP.


Side-by-Side Workflow Comparison

Let’s compare approaches for a common analytics engineering task: exploring an unfamiliar dataset before building dbt models.

The Scenario

You’ve been given access to a new ecommerce dataset. You need to understand its structure: what tables exist, their schemas, row counts, and key relationships.

CLI Approach

Terminal window
# List all tables
bq ls --format=pretty ecommerce

# Get schema for the orders table
bq show --schema --format=prettyjson ecommerce.orders

# Quick profiling: row count and date range
bq query --nouse_legacy_sql '
SELECT
  COUNT(*) as row_count,
  MIN(created_at) as earliest,
  MAX(created_at) as latest,
  COUNT(DISTINCT customer_id) as unique_customers
FROM ecommerce.orders
'

# Check for nulls in key columns
bq query --nouse_legacy_sql '
SELECT
  COUNTIF(customer_id IS NULL) as null_customers,
  COUNTIF(total_amount IS NULL) as null_amounts,
  COUNTIF(status IS NULL) as null_status
FROM ecommerce.orders
'

Claude generates these commands naturally, executes them in bash, and parses the output. The commands are compact; results stream directly.

MCP Approach

User: "Explore the ecommerce dataset - show me all tables with their schemas and basic profiling"
Claude:
1. Calls list_table_ids(dataset="ecommerce") → returns ["orders", "customers", "products", "order_items"]
2. Calls get_table_info(table="orders") → returns full schema JSON
3. Calls get_table_info(table="customers") → returns full schema JSON
4. Calls get_table_info(table="products") → returns full schema JSON
5. Calls get_table_info(table="order_items") → returns full schema JSON
6. Calls execute_sql(query="SELECT COUNT(*)...") for profiling
7. Returns formatted summary

The MCP approach provides structured, validated responses. But it loads tool definitions upfront and passes every result through context.

Token Comparison

| Approach | Tool Definition Overhead | Per-Operation Cost | Total (4 tables + profiling) |
| --- | --- | --- | --- |
| CLI | 0 tokens | ~40-60 tokens/command | ~300-400 tokens |
| MCP | 2,000-5,000 tokens | ~150-250 tokens/tool call | ~3,000-6,500 tokens |

For straightforward schema exploration, CLI wins on efficiency by an order of magnitude. The gap narrows for complex multi-step workflows where MCP’s structured responses simplify downstream processing, but it never disappears.


When MCP Genuinely Wins

The “CLI is better” thesis has important exceptions. Dismissing MCP entirely would be as wrong as adopting it uncritically.

Enterprise Security and Compliance

MCP servers can run in sandboxed containers with restricted filesystem access. Credentials stay external, never exposed to the LLM. Every tool call produces an audit trail.

For teams under SOX, HIPAA, or SOC 2 requirements, this matters. Google’s documentation emphasizes that managed MCP servers let agents “query data without the security risks of moving data into context windows.”

One security-conscious practitioner recommended: “Use MCP servers rather than the BigQuery CLI to maintain better security control over what Claude Code can access, especially for handling sensitive data that requires logging or has potential privacy concerns.”

If your compliance team requires audit logs of every data access, MCP provides that by default. CLI commands in Claude Code would require additional instrumentation.
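
That instrumentation doesn’t have to be heavyweight. One hedged sketch, assuming a POSIX shell, a writable home directory, and the real binary at /usr/bin/bq: place a thin wrapper ahead of bq on the PATH Claude Code inherits so every invocation is timestamped before it runs.

#!/usr/bin/env bash
# ~/bin/bq -- thin audit wrapper; assumes ~/bin precedes /usr/bin on PATH
set -euo pipefail

# Append a timestamped record of the full command line, then delegate
AUDIT_LOG="${BQ_AUDIT_LOG:-$HOME/.bq_audit.log}"
printf '%s\t%s\tbq %s\n' "$(date -u +%FT%TZ)" "$USER" "$*" >> "$AUDIT_LOG"

exec /usr/bin/bq "$@"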

Complex Output Parsing

CLI tools often produce verbose, inconsistent output. Parsing bq show output reliably across different table types requires handling edge cases.

MCP servers preprocess results into clean structured data. One developer noted their Xcode MCP server parses “thousands of lines of build output and returns a clean, structured object: {errors: [...], warnings: []}”, far simpler than regex parsing of raw compiler output.

For BigQuery, this matters most when schemas are complex or when you need to programmatically process metadata. MCP’s get_table_info returns consistent JSON; bq show --schema returns JSON too, but wrapped in output you need to extract.
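
That said, when you do want structure out of the CLI, it is usually one pipe away. A sketch, assuming the ecommerce.orders table from the earlier example and jq installed locally:

# Emit the schema as JSON and reduce it to name/type/mode triples
bq show --schema --format=json ecommerce.orders \
  | jq -r '.[] | [.name, .type, .mode // "NULLABLE"] | @tsv'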

APIs Without CLI Equivalents

Many enterprise systems (Salesforce, HubSpot, Jira, Notion) only expose REST APIs. There’s no CLI to invoke.

MCP servers provide structured interfaces that dramatically improve LLM reliability. Asking Claude to generate raw curl commands with OAuth tokens is error-prone. An MCP server handles authentication and presents clean tool interfaces.

If your workflows integrate BigQuery with SaaS platforms, MCP provides the bridge CLI cannot.

Multi-Tool Orchestration

Google’s retail analytics demo connects BigQuery for revenue forecasting with Maps for location analysis. The agent queries sales data, identifies underperforming regions, and retrieves geographic context, coordinating across services.

For cross-platform workflows where data flows between systems, MCP’s structured handoffs reduce errors. CLI tools work in isolation; MCP servers can share context.

Natural Language Analytics for Business Users

The ask_data_insights tool in Google’s BigQuery toolbox enables conversational data exploration that would require complex prompt engineering with CLI alone.

If you’re building interfaces for business users who shouldn’t need to understand SQL, MCP’s natural language capabilities add genuine value. CLI assumes technical fluency MCP can abstract away.


The Practical Recommendation

For GCP/BigQuery workflows, a hybrid approach works best. Neither CLI nor MCP is universally superior; each excels in different contexts.

Use CLI When:

  • Data loading and export: bq load, bq extract, bq cp have no MCP equivalents
  • Quick exploration: Ad-hoc queries, schema checks, row counts
  • CI/CD pipelines: Portable, scriptable, no server dependencies
  • Full feature coverage: Partitioning, clustering, authorized views, reservations
  • Token efficiency matters: Cost-sensitive workloads, long conversations
  • Offline development: No server process to maintain

Use MCP When:

  • Security requirements: Audit trails, credential isolation, sandboxed execution
  • Business user interfaces: Natural language analytics without SQL
  • SaaS integrations: Systems without CLI equivalents
  • Structured responses: When downstream code needs predictable JSON
  • Multi-platform orchestration: Workflows spanning BigQuery + other services

Hybrid Configuration

Claude Code supports both approaches simultaneously. Configure MCP for tools that benefit from it while keeping CLI access for everything else:

{
  "mcpServers": {
    "bigquery": {
      "command": "npx",
      "args": ["-y", "@ergut/mcp-bigquery-server", "--project-id", "your-project-id"]
    }
  },
  "permissions": {
    "allow": [
      "Bash(bq:*)",
      "Bash(gcloud:*)",
      "Bash(gsutil:*)",
      "Bash(dbt:*)"
    ]
  }
}

This configuration gives Claude access to MCP’s structured query interface while preserving CLI access for data loading, exports, and the full range of bq capabilities.

For teams prioritizing token efficiency, consider starting with CLI only and adding MCP servers selectively for specific use cases, rather than the reverse.


The Industry Trajectory

Cloudflare and Anthropic arrived at remarkably similar conclusions within weeks of each other in late 2025.

Cloudflare’s “Code Mode” converts MCP tool schemas into TypeScript APIs, then asks LLMs to write code against those interfaces. The model never sees raw tool definitions, just familiar TypeScript.

Anthropic’s “Code execution with MCP” takes a different implementation path but reaches the same destination: present tools as filesystem-accessible code modules that agents discover on demand. Write code, not tool calls.

Both approaches leverage the same insight: LLMs have trained on code, not on synthetic tool-calling formats. Meet them where they’re strongest.

For data engineers, this validates what many suspected. The CLI tools you’ve used for years aren’t legacy technology awaiting replacement. They’re interfaces Claude already understands fluently, with a decade of training examples to draw from.

Google’s December 2025 announcement of managed MCP servers suggests the protocol will continue expanding. Enterprise features (audit logging, credential management, sandboxing) will improve. More tools will gain MCP interfaces.

But for BigQuery workflows where bq and gcloud provide complete coverage, the fundamental efficiency advantages of native CLI remain. MCP adds overhead that may or may not be justified by your specific requirements.


Conclusion

The evidence supports a nuanced position: for GCP/BigQuery workflows, native CLI commands through Claude Code offer superior token efficiency (up to 98.7% savings in extreme cases), complete feature coverage (data loading, exports, table management), and leverage Claude’s extensive training on real-world shell commands.

MCP servers add genuine value for enterprise security requirements, structured response handling, SaaS integrations, and natural language analytics, but the token overhead is real and compounds across long sessions.

The tools aren’t mutually exclusive. Claude Code’s flexibility lets you configure both approaches, choosing based on context. Use CLI for data engineering operations where you need the full bq feature set. Use MCP when security, structure, or cross-platform orchestration justify the overhead.

The broader lesson extends beyond BigQuery. When evaluating any MCP integration, ask: does a CLI already exist? How extensive is its training data footprint? What capabilities does MCP add versus what does it abstract away?

Sometimes the newest protocol is the right choice. Sometimes the tool you’ve used for a decade is what your AI assistant already speaks fluently.

For BigQuery, it’s often the latter.