AI coding agents introduce a new class of GCP authentication constraints beyond the standard multi-project problem. Three constraints apply to every agent, regardless of vendor:
- No interactive OAuth flows. gcloud auth login opens a browser window for user consent. Terminal agents can't handle this: there's no callback URL and no user interaction surface.
- No mid-session re-authentication. When tokens expire during a long task, agents can't prompt you to re-authenticate. They fail silently or with cryptic errors.
- Config file conflicts when running in parallel. Multiple agents running simultaneously against the same ~/.config/gcloud/ directory step on each other's state.
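The third constraint has a general mitigation: the gcloud CLI honors the CLOUDSDK_CONFIG environment variable, so each agent session can get its own state directory. A minimal sketch, with a per-session directory layout that is an assumption rather than a fixed convention:

```shell
# Give each agent session an isolated gcloud state directory.
# CLOUDSDK_CONFIG redirects all gcloud reads/writes away from ~/.config/gcloud.
AGENT_ID="agent-$$"                                    # hypothetical per-session identifier
export CLOUDSDK_CONFIG="$HOME/.gcloud-sessions/$AGENT_ID"
mkdir -p "$CLOUDSDK_CONFIG"

# Two agents launched with different CLOUDSDK_CONFIG values can now run
# gcloud commands in parallel without clobbering each other's state.
echo "gcloud state isolated under: $CLOUDSDK_CONFIG"
```

Each session pays a one-time setup cost (the isolated directory starts with no active account or project), but parallel agents stop corrupting each other's configuration.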
How different agents handle these constraints varies considerably.
Claude Code
Claude Code defers entirely to environment credentials when connecting to Vertex AI or running gcloud commands. It uses standard ADC, which means it needs pre-authentication via gcloud auth login and gcloud auth application-default login before you start a session.
Token expiry is a known issue. A documented bug (GitHub #726) affects organizations with GCP session length restrictions. When credentials expire mid-session, Claude Code doesn’t detect or refresh them. It continues using stale cached credentials until it encounters a hard failure, typically an invalid_grant error. The only fix is restarting Claude Code entirely.
Headless environment authentication (device code flow, for environments without browser access) is an open feature request (GitHub #7100) with no timeline.
The practical implication: If you use GCP organization policies with short session lengths (common in enterprise security setups), Claude Code sessions will break more frequently. For most consulting setups with service account credentials (which don’t expire the same way), this is less of an issue.
What works reliably: Start Claude Code from inside a direnv-managed project directory. The agent inherits the shell’s environment variables, including CLOUDSDK_CONFIG, GOOGLE_APPLICATION_CREDENTIALS, and GOOGLE_CLOUD_PROJECT. The agent uses the right project’s credentials automatically without any special configuration. See direnv Multi-Client GCP Setup for the setup.
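A sketch of what such an .envrc might contain (the key path and project ID below are placeholders, not a prescribed layout):

```shell
# Hypothetical .envrc -- direnv sources this when you cd into the project.
export CLOUDSDK_CONFIG="$PWD/.gcloud"                               # per-project gcloud state
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/.secrets/sa-key.json"   # assumed key location
export GOOGLE_CLOUD_PROJECT="acme-prod"                             # assumed project ID
```

Because Claude Code inherits the shell environment, starting it from this directory is enough; no agent-side configuration is needed.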
Security dimension: Research has documented that malicious repositories can use MCP integrations or shell hooks to exfiltrate active gcloud credentials before you’ve granted trust to the project. This is one reason per-project CLOUDSDK_CONFIG isolation matters beyond the multi-client problem — a credential theft is limited to one project’s temporary config directory, not your entire ~/.config/gcloud/.
OpenAI Codex
Codex takes the opposite architectural approach. It runs in isolated containers with no internet access by default. Secrets can be provided via environment variables during setup, but they’re removed before the agent execution phase starts.
This means persistent GCP credential access is blocked under the default configuration. If you need Codex to interact with GCP, you have to explicitly enable network access and wire up credentials yourself; it's a deliberate design choice, not an oversight.
Codex has added device code auth (codex login --device-auth) for its own Codex account authentication, but GCP credential management remains entirely your responsibility. The isolation model is strong by design. The tradeoff is that credential plumbing has to be done upfront, explicitly, before any agent execution.
Cursor
Cursor’s agent terminal runs in a sandboxed, non-interactive subshell. It does not inherit user environment variables and does not source shell config files like .zshrc or .bashrc.
This has a direct consequence: interactive cloud CLI commands are completely non-functional. Running gcloud auth login inside Cursor’s agent terminal will fail because there’s no browser, no shell sourcing, and no environment variable inheritance.
The community workaround is using MCP servers for GCP interactions instead of direct gcloud CLI commands. An MCP server configured with credentials at startup can accept structured requests from the agent without requiring CLI access. Cursor and Windsurf support project-level MCP configurations, meaning the MCP server restarts with the right credentials when you switch projects.
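As an illustration, a project-level Cursor MCP config (typically .cursor/mcp.json in the project root) might wire a BigQuery MCP server to that client's credentials. The server package name and key path below are placeholders, not real identifiers:

```json
{
  "mcpServers": {
    "client-a-bigquery": {
      "command": "npx",
      "args": ["-y", "example-bigquery-mcp-server"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/client-a/.secrets/sa-key.json",
        "GOOGLE_CLOUD_PROJECT": "acme-prod"
      }
    }
  }
}
```

Because the config lives in the project, switching projects switches the credentials the MCP server starts with.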
Service Accounts Across All Agents
For all agents, service account JSON key files remain the most reliable authentication method. The reasons:
- Service accounts don’t require interactive browser flows
- Key files don’t expire during sessions (though they can be revoked)
- Any tool that reads GOOGLE_APPLICATION_CREDENTIALS works without modification
- Key files work in sandboxed, non-interactive environments
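As a sketch, minting a key for an existing service account and pointing ADC at it might look like this. The service account name and key path are assumptions, and the gcloud call is guarded so the snippet degrades gracefully where gcloud isn't installed or authenticated:

```shell
# Assumed service account and key location -- adjust for your setup.
SA="dbt-runner@acme-prod.iam.gserviceaccount.com"
KEY_FILE="$HOME/.secrets/acme-prod-sa.json"
mkdir -p "$(dirname "$KEY_FILE")"

# Mint a key only if gcloud is available; this requires appropriate IAM permissions.
if command -v gcloud >/dev/null 2>&1; then
  gcloud iam service-accounts keys create "$KEY_FILE" --iam-account="$SA" \
    || echo "key creation failed (gcloud not authenticated?)"
fi

# Every ADC-aware tool (gcloud, client libraries, dbt, Terraform) picks this up.
export GOOGLE_APPLICATION_CREDENTIALS="$KEY_FILE"
```

Treat the key file like a password: keep it out of version control and restrict its filesystem permissions.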
Google strongly recommends against service account key files in favor of Workload Identity Federation. WIF is genuinely better security — short-lived tokens, no static keys on disk, proper audit trails. But WIF requires an external identity provider (GitHub Actions OIDC, AWS IAM) to vouch for your workload. A developer laptop isn’t GitHub Actions. There’s no external identity provider for local development.
The middle ground — service account impersonation — generates a short-lived access token without a persistent key file:
```shell
gcloud auth print-access-token \
  --impersonate-service-account=dbt-runner@acme-prod.iam.gserviceaccount.com
```

This generates a 60-minute token with a proper audit trail. The problem for agent workflows is that tokens expire and need refreshing, which adds friction precisely when you want agents to run uninterrupted. See Service Account Key vs Impersonation Tradeoffs for the full analysis of when each approach is appropriate.
Where This Is Heading
Google’s official MCP servers for BigQuery, Cloud SQL, and other services shift authentication from the shell layer to the protocol layer. Instead of an agent running gcloud commands in your shell and inheriting (or fighting with) your global state, the agent talks to an MCP server that manages its own credentials.
For multi-project work, this is the architecturally correct direction. The failure mode to watch for is context drift: your IDE open to Client A’s project while the background MCP process is still authenticated to Client B. Project-level mcp_config.json files prevent this. The ecosystem is still maturing as of early 2026, but it’s the right direction.