The shared service account is one of the most common forms of IAM debt in data platforms. A single etl-service-account@project.iam.gserviceaccount.com runs Airflow DAGs, Cloud Run Jobs, scheduled queries, and maybe a cron job on a Compute Engine instance nobody remembers creating. When something breaks or costs spike, you can’t determine which workload caused it. When you need to rotate credentials, you risk breaking everything simultaneously.
The fix is one service account per workload, named to be self-documenting in logs.
## The Naming Convention
```
{platform-prefix}-{workload-name}@project.iam.gserviceaccount.com
```

Platform prefixes by compute environment:
- `crj` — Cloud Run Job
- `cmp` — Cloud Composer (Airflow)
- `wlif` — Workload Identity Federation (keyless auth for external systems)
- `crf` — Cloud Run Function (previously Cloud Functions)
- `gce` — Compute Engine
- `gke` — GKE workload
Examples:
```
crj-dbt-daily@project.iam.gserviceaccount.com
crj-dbt-hourly@project.iam.gserviceaccount.com
cmp-extraction-dag@project.iam.gserviceaccount.com
cmp-transformation-dag@project.iam.gserviceaccount.com
wlif-github-actions@project.iam.gserviceaccount.com
wlif-terraform-ci@project.iam.gserviceaccount.com
```

When a query appears unexpectedly in `INFORMATION_SCHEMA.JOBS_BY_PROJECT`, the service account name immediately tells you which workload ran it and which compute platform it lives on. You can go directly to the Cloud Run Job console, the Composer DAG list, or the GitHub Actions workflow — no cross-referencing required.
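A convention is only useful if it's enforced. As a sketch, the naming rule above can be checked mechanically — the function below is hypothetical (not part of any gcloud tooling), and the project name is a placeholder:

```shell
# Sketch: validate service account emails against the platform-prefix convention.
# The prefix alternation mirrors the table above.
check_sa_name() {
  local email="$1"
  if [[ "$email" =~ ^(crj|cmp|wlif|crf|gce|gke)-[a-z0-9-]+@[a-z0-9-]+\.iam\.gserviceaccount\.com$ ]]; then
    echo "OK: $email"
  else
    echo "BAD: $email"
  fi
}

check_sa_name "crj-dbt-daily@my-project.iam.gserviceaccount.com"
check_sa_name "etl-service-account@my-project.iam.gserviceaccount.com"
```

You could run a check like this in CI against `gcloud iam service-accounts list` output to catch unprefixed accounts as they appear.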
## Why Platform Prefix Matters
Service accounts show up in logs, cost reports, and audit trails without any other context. Without the prefix, `dbt-daily` and `dbt-hourly` could be running anywhere. With the `crj-` prefix, you know both live in Cloud Run Jobs and exactly where to look when investigating.
The prefix also creates natural groupings in alphabetical lists. All Cloud Composer service accounts cluster together. All Workload Identity Federation accounts cluster together. Sorting by service account email in a GCP console view becomes meaningful rather than arbitrary.
## Minimal Permission Scoping
Each service account gets only what its workload needs. For a Cloud Run Job running dbt daily:
```sh
# Create the service account
gcloud iam service-accounts create crj-dbt-daily \
  --display-name="Cloud Run Job - dbt daily run" \
  --project=YOUR_PROJECT_ID
```
```sh
# Grant dataEditor so dbt can write (shown project-wide here;
# dataset-level grants are tighter where your dataset layout allows)
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:crj-dbt-daily@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataEditor"
```
```sh
# Grant jobUser to run queries
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:crj-dbt-daily@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
```

For a read-only Composer DAG that only checks job status:
```sh
gcloud iam service-accounts create cmp-status-check \
  --display-name="Composer - status check DAG" \
  --project=YOUR_PROJECT_ID
```
```sh
# dataViewer only — this workload never writes
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:cmp-status-check@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"
```
```sh
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:cmp-status-check@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
```

The principle: read workloads get `dataViewer`, write workloads get `dataEditor`, and all workloads need `jobUser` to actually execute queries. Don’t grant `dataEditor` to workloads that only read. The audit will catch over-permissioning eventually; IAM Recommender will flag it. Better to start scoped correctly.
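The read/write split above is mechanical enough to script. This is a hypothetical helper, not part of gcloud — it echoes the binding commands rather than executing them, so you can review before applying; the account and project names are placeholders:

```shell
# Sketch: print the IAM grant commands for a new workload service account.
# mode is "read" (dataViewer) or "write" (dataEditor); jobUser is always added.
grant_for_workload() {
  local sa_email="$1" mode="$2" project="$3"
  local data_role
  case "$mode" in
    read)  data_role="roles/bigquery.dataViewer" ;;
    write) data_role="roles/bigquery.dataEditor" ;;
    *) echo "unknown mode: $mode" >&2; return 1 ;;
  esac
  # Every workload needs jobUser to execute queries at all.
  for role in "$data_role" "roles/bigquery.jobUser"; do
    echo "gcloud projects add-iam-policy-binding $project" \
         "--member=serviceAccount:$sa_email --role=$role"
  done
}

grant_for_workload "cmp-status-check@my-project.iam.gserviceaccount.com" read my-project
```

Piping the output to `bash` applies it; reviewing it first keeps the grants auditable.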
## Service Account Impersonation for Local Development
Per-workload service accounts also solve the local development credential problem without distributing keys. Grant engineers the ability to impersonate the relevant service account:
```sh
gcloud iam service-accounts add-iam-policy-binding \
  crj-dbt-daily@project.iam.gserviceaccount.com \
  --member="user:engineer@yourdomain.com" \
  --role="roles/iam.serviceAccountTokenCreator"
```

Now the engineer authenticates locally with their own identity and impersonates the service account:
```sh
gcloud auth application-default login \
  --impersonate-service-account=crj-dbt-daily@project.iam.gserviceaccount.com

dbt run
```

The GCP Application Default Credentials mechanism handles the credential exchange. The audit log records both the human identity and the service account they impersonated. No keys to rotate, no credentials to leak, and you know exactly who ran what and as which account.
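For dbt specifically, the dbt-bigquery adapter can also perform the impersonation itself via the `impersonate_service_account` profile key, instead of baking it into the ADC login. A sketch `profiles.yml`, with placeholder project, dataset, and profile names:

```yaml
# profiles.yml (sketch — names are placeholders)
my_project:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth            # the engineer's own ADC identity
      project: YOUR_PROJECT_ID
      dataset: dbt_dev
      threads: 4
      # dbt exchanges the engineer's credentials for this account's:
      impersonate_service_account: crj-dbt-daily@YOUR_PROJECT_ID.iam.gserviceaccount.com
```

This keeps the impersonation target in version control next to the project, so every engineer impersonates the same account without per-machine gcloud setup.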
## Migrating from a Shared Account
When you’re splitting an existing shared service account into per-workload accounts:
- Run the shared service account detection query to understand what the account is actually doing
- Create new service accounts for each workload, scoped to only what that workload’s query history shows it actually accesses
- Update workload configurations one at a time (start with the lowest-risk, lowest-traffic workloads)
- Monitor `INFORMATION_SCHEMA.JOBS_BY_PROJECT` to confirm the old account’s job count drops to zero
- Disable the old service account (don’t delete it immediately — give it a week to confirm nothing was missed)
- Delete the old account once you’re confident nothing depends on it
Don’t try to split everything at once. The shared account is shared because it was convenient; undoing that convenience requires careful sequencing.
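The monitoring step in the checklist above can be sketched with the bq CLI. Assumptions: a US-region project (adjust the `region-us` qualifier), and placeholder project and account names:

```shell
# Sketch: count jobs the old shared account still runs per day.
old_sa="etl-service-account@my-project.iam.gserviceaccount.com"

query="SELECT DATE(creation_time) AS day, COUNT(*) AS jobs
FROM \`region-us\`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE user_email = '$old_sa'
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY day ORDER BY day"

# Print the command for review; drop the leading echo to execute it.
echo bq query --use_legacy_sql=false "$query"
```

When the daily count sits at zero for the full observation window, it's safe to move to the disable step.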