GitHub Actions scheduled workflows can run dbt build on a cron expression with credentials stored as repository secrets and Python installed fresh on each run. For dbt projects already on GitHub, this approach requires no additional infrastructure, no GCP configuration, and no new platforms.
The Setup
A minimal scheduled dbt run on GitHub Actions:
```yaml
name: dbt daily run

on:
  schedule:
    - cron: '0 6 * * *'   # 06:00 UTC daily
  workflow_dispatch:       # allows manual runs from the Actions UI

jobs:
  dbt-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install dbt-bigquery
      # GOOGLE_APPLICATION_CREDENTIALS must point at a file path, not the
      # JSON itself, so write the secret to disk before invoking dbt
      - run: echo '${{ secrets.GCP_SA_KEY }}' > /tmp/gcp-key.json
      - run: dbt deps && dbt build --target prod
        env:
          DBT_PROFILES_DIR: .
          GOOGLE_APPLICATION_CREDENTIALS: /tmp/gcp-key.json
```

The workflow_dispatch trigger lets you manually kick off a run from the GitHub Actions UI, which is useful for debugging and for ad hoc runs outside the schedule. Add it to every scheduled workflow.
For BigQuery authentication, you have two options: store the service account JSON as a secret and pass it via GOOGLE_APPLICATION_CREDENTIALS, or use GitHub’s OIDC integration with GCP Workload Identity Federation for keyless auth. The keyless approach is more secure — no long-lived credentials in your secrets — but requires a one-time GCP configuration to establish the trust relationship.
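A sketch of the keyless variant, replacing the key-file steps above; the pool, provider, and service account values below are placeholders for whatever you configured in Workload Identity Federation:

```yaml
permissions:
  id-token: write   # lets the job request a GitHub OIDC token
  contents: read

jobs:
  dbt-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Exchanges the OIDC token for short-lived GCP credentials and
      # exports GOOGLE_APPLICATION_CREDENTIALS for later steps
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/github/providers/my-provider
          service_account: dbt-runner@my-project.iam.gserviceaccount.com
```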
For profiles.yml, you can either commit a production profile to the repo (with no sensitive values) and pass the service account separately, or generate the profile dynamically in the workflow step using environment variables. The latter keeps all secrets out of the repository entirely:
```yaml
- name: Create profiles.yml
  run: |
    # dbt reads ~/.dbt/profiles.yml by default, so no DBT_PROFILES_DIR
    # override is needed with this approach
    mkdir -p ~/.dbt
    cat > ~/.dbt/profiles.yml << EOF
    my_project:
      target: prod
      outputs:
        prod:
          type: bigquery
          method: service-account
          project: ${{ secrets.GCP_PROJECT_ID }}
          dataset: analytics
          keyfile: /tmp/gcp-key.json
          threads: 4
    EOF
    echo '${{ secrets.GCP_SA_KEY }}' > /tmp/gcp-key.json
```

What You Get
Zero additional infrastructure. The workflow YAML is version-controlled alongside the dbt project, reviewed in PRs, and visible to every team member with repository access.
Free compute tier. Private repos get 2,000 minutes per month on the free tier. A typical dbt run takes 3-10 minutes including Python and dependency installation, so a daily schedule uses roughly 90-300 minutes per month, comfortably inside the allowance. Hourly scheduling is borderline: even at 3 minutes per run, 24 × 30 runs is about 2,160 minutes, just over the free tier. Public repos get unlimited minutes.
Manual trigger support. The workflow_dispatch event enables on-demand runs from the GitHub UI without modifying the cron schedule.
PR-based visibility. Changes to the workflow file go through the normal PR process, with a full audit trail and git revert as rollback.
The Tradeoffs
GitHub Actions for scheduling comes with real limitations that are worth internalizing before you commit to it.
Timing is approximate. GitHub doesn’t guarantee exact cron execution during high-load periods. A workflow scheduled for 6 AM might start at 6:03 or 6:15. For analytical refreshes where a 15-minute window is acceptable, this is fine. If a downstream system depends on dbt completing before 6:05 AM, GitHub Actions won’t reliably deliver.
Cold starts on every run. Each run installs Python, dbt-bigquery, and all your dbt packages from scratch. On a project with many packages, this can add 3-5 minutes to every execution. There’s no package caching equivalent to what a dedicated container gives you. You can mitigate with Actions caching for pip packages, but it’s an extra configuration step:
```yaml
- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
```

No retry logic. A failed dbt build does not automatically retry. You can add retry logic manually with a third-party action or with shell loops, but it’s not native. For pipelines where transient failures (network timeouts, BigQuery quota spikes) are common, the lack of automatic retry is a material gap compared to Cloud Run Jobs’ built-in retry handling.
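A minimal shell-loop sketch of what manual retry looks like; the attempt count and 60-second pause are arbitrary choices:

```yaml
- name: dbt build with retries
  run: |
    # Try up to 3 times, pausing 60 seconds between attempts
    for attempt in 1 2 3; do
      dbt build --target prod && exit 0
      echo "dbt build failed (attempt $attempt); retrying in 60s..."
      sleep 60
    done
    exit 1
```

Note this reruns the whole build on each attempt; on later attempts, dbt retry (dbt Core 1.6+) can resume from the failure point rather than starting over.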
Limited cross-workflow dependency awareness. Within a single repository, the workflow_run trigger can chain workflows (an ingestion workflow finishing can start the dbt workflow, as sketched below), but there is no native way to wait on events outside GitHub. Workarounds exist (polling an external API, having the upstream system fire a repository_dispatch webhook), but they’re brittle. This is the most significant structural limitation: the moment you need “dbt runs only after Fivetran sync succeeds,” GitHub Actions stops being the right tool.
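A minimal sketch of that same-repo chaining, assuming the upstream workflow is named “ingestion sync”:

```yaml
on:
  workflow_run:
    workflows: ["ingestion sync"]
    types: [completed]

jobs:
  dbt-build:
    # workflow_run fires on failure as well, so gate on the upstream result
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...then the same setup and dbt build steps as the scheduled workflow
```

This only moves the dependency problem inside GitHub; it does nothing for upstream systems like Fivetran.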
Artifacts are transient. dbt generates run artifacts (manifest.json, run_results.json) that downstream tools and state-based comparisons rely on. In GitHub Actions, these artifacts aren’t automatically persisted. You need to explicitly upload them to GitHub’s artifact storage or to Cloud Storage if you want to use dbt’s state-based selection (e.g., dbt build --select state:modified) across runs.
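A sketch of the simplest mitigation, using GitHub’s own artifact storage (the artifact name and retention window here are arbitrary; for cross-run state comparison you would more typically copy manifest.json to a GCS bucket):

```yaml
- name: Upload dbt artifacts
  if: always()   # keep artifacts even when the build fails
  uses: actions/upload-artifact@v4
  with:
    name: dbt-artifacts
    path: target/*.json
    retention-days: 30
```

A later run still has to download the previous manifest before dbt build --select state:modified has anything to compare against.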
When This Is the Right Choice
GitHub Actions scheduling fits when:
- The team is already on GitHub and adding GCP infrastructure is not justified
- The dbt project has one clear daily (or near-daily) execution with no inter-job dependencies
- Run timing precision within a ±15-minute window is acceptable
- Zero additional monthly cost is a constraint
- The team is small and already familiar with GitHub Actions
When to Choose Cloud Run Instead
Cloud Run Jobs with Cloud Scheduler becomes the better choice when:
- You need reliable retry logic without custom code
- Your pipeline has dependencies on other jobs or systems
- You want persistent artifact storage across runs
- Run timing precision matters
- You’re already managing GCP infrastructure
The two approaches are not mutually exclusive. Many teams start with GitHub Actions and migrate to Cloud Run as requirements grow. The dbt project itself does not change — only what invokes dbt build changes. Migration cost is low.
For a broader view of where this fits in the decision space across all GCP-native scheduling options, the dbt Orchestration Decision Framework for GCP covers the full comparison including Cloud Workflows for multi-step pipelines.