BigQuery’s permission model separates data access from compute access. Each must be granted at the appropriate scope.
The Core Principle: Separate Data from Compute
The essential pattern separates two concerns:
Project-level: compute permissions. Grant bigquery.jobUser to give users the ability to run queries. This is the “can you use BigQuery at all?” permission. It lets them execute jobs but doesn’t determine what data they can access.
Dataset-level: data permissions. Grant bigquery.dataViewer or bigquery.dataEditor to control what data users can see and modify. This determines which tables are accessible.
A user with jobUser on project-analytics and dataViewer on project-warehouse.sales_data can run queries in project-analytics that read from project-warehouse.sales_data. They cannot read from datasets where they lack dataViewer. They cannot run queries in projects where they lack jobUser.
This two-axis model is what makes the multi-project patterns work: you control where compute happens separately from where data lives.
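As a concrete sketch of the two-axis model (the project names are from the example above; the user email is hypothetical), the pair of grants might look like this in Terraform:

```hcl
# Hypothetical example: an analyst runs queries in project-analytics
# and reads a single dataset in project-warehouse.
resource "google_project_iam_member" "analyst_job_user" {
  project = "project-analytics" # where compute runs
  role    = "roles/bigquery.jobUser"
  member  = "user:analyst@example.com"
}

resource "google_bigquery_dataset_iam_member" "analyst_sales_viewer" {
  project    = "project-warehouse" # where the data lives
  dataset_id = "sales_data"
  role       = "roles/bigquery.dataViewer"
  member     = "user:analyst@example.com"
}
```

Removing either binding removes one axis: without the first, the analyst cannot run jobs anywhere; without the second, their jobs cannot read sales_data.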
Predefined Roles to Use
Google provides BigQuery-specific predefined roles. Use these instead of basic roles (Owner, Editor, Viewer), which predate IAM and grant excessive, poorly-scoped access.
| Role | What it grants | Typical scope |
|---|---|---|
| bigquery.dataViewer | Read tables and views | Dataset |
| bigquery.dataEditor | Create, update, delete tables | Dataset |
| bigquery.jobUser | Run queries and jobs | Project |
| bigquery.user | List datasets + run queries | Project |
| bigquery.admin | Full BigQuery control | Rarely needed |
The bigquery.user role is a superset of jobUser that also lets users list datasets and create new ones. For most analytics users, jobUser plus explicit dataViewer grants is more appropriate because it prevents accidental dataset creation.
Service Account Strategy
Create dedicated service accounts for each function in your data stack:
- sa-dbt-dev@analytics-dev.iam.gserviceaccount.com
- sa-dbt-prod@analytics-prod.iam.gserviceaccount.com
- sa-fivetran@analytics-prod.iam.gserviceaccount.com
- sa-looker@analytics-prod.iam.gserviceaccount.com
- sa-airflow@analytics-prod.iam.gserviceaccount.com

Each service account gets only the permissions it needs:
- dbt production: bigquery.dataEditor (to create tables) + bigquery.jobUser (to run queries)
- Looker: bigquery.dataViewer only. It should never create or modify tables.
- Fivetran/ingestion: bigquery.dataEditor on raw datasets only, not on downstream models.
- Airflow/orchestrator: bigquery.jobUser to trigger jobs, potentially dataViewer to check dependencies.
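The Looker grant above can be sketched in Terraform as the read-only mirror of an ingestion or transformation account: dataViewer on the serving dataset plus project-level jobUser. The dataset name "marts" is an assumption for illustration; the service account email is from the list above.

```hcl
# Hypothetical example: Looker reads the serving dataset and nothing else.
resource "google_bigquery_dataset_iam_member" "looker_marts_viewer" {
  project    = "analytics-prod"
  dataset_id = "marts" # assumed serving dataset
  role       = "roles/bigquery.dataViewer"
  member     = "serviceAccount:sa-looker@analytics-prod.iam.gserviceaccount.com"
}

resource "google_project_iam_member" "looker_job_user" {
  project = "analytics-prod"
  role    = "roles/bigquery.jobUser"
  member  = "serviceAccount:sa-looker@analytics-prod.iam.gserviceaccount.com"
}
```

If Looker's credentials leak, the blast radius is read access to one dataset, not write access to the warehouse.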
Rotate service account keys at least every 90 days when key files are unavoidable. Prefer keyless authentication for GCP-hosted workloads: Cloud Run can run as an attached service account, GKE supports Workload Identity, and workloads outside GCP can use Workload Identity Federation, none of which require key management.
For local development, service account impersonation is the recommended pattern. Developers authenticate via OAuth (their own identity) but impersonate a service account with appropriate permissions. This avoids distributing key files while maintaining clear audit trails.
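Impersonation requires one extra IAM binding: the developer (or their group) needs roles/iam.serviceAccountTokenCreator on the service account itself. A minimal sketch, assuming the sa-dbt-dev account from the list above and a hypothetical developer group:

```hcl
# Hypothetical example: let the analytics-dev group impersonate the dev
# service account. Token Creator is the role impersonation requires.
resource "google_service_account_iam_member" "dev_impersonation" {
  service_account_id = "projects/analytics-dev/serviceAccounts/sa-dbt-dev@analytics-dev.iam.gserviceaccount.com"
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "group:analytics-dev@example.com"
}
```

With this in place, tools can impersonate the account instead of loading a key file (for example, dbt's BigQuery adapter exposes an impersonate_service_account profile setting), and audit logs still record which human identity initiated each request.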
What to Avoid
Basic roles (Owner, Editor, Viewer) grant broad permissions across all GCP services, not just BigQuery. An Editor can modify Cloud Storage buckets, Compute Engine instances, and everything else. Never use these for BigQuery access control. If someone has Editor on a project “because they needed BigQuery access,” that’s a security problem masquerading as a convenience choice.
Project-level bigquery.admin gives full control over all BigQuery resources in a project. Most users don’t need this. Most service accounts definitely don’t need this. Grant admin access only to the small number of people who actually administer BigQuery settings — things like managing reservations, setting project-level quotas, or configuring audit logging.
Overly broad dataset access happens when teams grant dataViewer on all datasets to avoid permission request tickets. This defeats the purpose of dataset-level isolation. If someone needs access to specific data, grant access to that specific dataset. Yes, this creates more IAM bindings. That’s the point.
Broad admin grants to avoid access friction create security risks, audit noise, and confusion. When a credential leaks, the blast radius is the entire project. The pattern typically emerges when someone needed access quickly and the permissions were never cleaned up.
IAM as Code
The solution to permission sprawl is treating IAM as code: define permissions in Terraform or similar, require review for changes, and audit regularly.
Start by identifying over-permissioned principals:
- Any user with bigquery.admin at project level (most don't need it)
- Service accounts with dataEditor on datasets they only read
- Basic roles (Owner/Editor/Viewer) granted anywhere
Downgrade to the minimum required permissions. This may cause temporary access issues, but those are better addressed intentionally than discovered during an incident.
```hcl
# Example: Terraform IAM for a dbt production service account
resource "google_bigquery_dataset_iam_member" "dbt_prod_editor" {
  for_each   = toset(["base", "intermediate", "marts"])
  dataset_id = each.value
  role       = "roles/bigquery.dataEditor"
  member     = "serviceAccount:sa-dbt-prod@analytics-prod.iam.gserviceaccount.com"
}

resource "google_project_iam_member" "dbt_prod_job_user" {
  project = "analytics-prod"
  role    = "roles/bigquery.jobUser"
  member  = "serviceAccount:sa-dbt-prod@analytics-prod.iam.gserviceaccount.com"
}
```

Permissions defined in code are reviewable, auditable, and reproducible.