You shipped your first dbt project on BigQuery. The models work, the tests pass, and your stakeholders are happy with the data in Looker. Now you need it to run every morning at 7am.
The natural instinct is to reach for an orchestrator. Dagster, Airflow, Prefect: the internet has opinions about all of them. But before you spend weeks evaluating platforms that cost $250-500/month, ask a simpler question: does a cron job cover what you actually need?
## The cost cliff
There’s a dramatic gap between simple scheduling and managed orchestration. A daily dbt build triggered by Cloud Scheduler and Cloud Run costs $0-3/month on GCP’s free tier. Jump to Cloud Composer (managed Airflow) and you’re looking at $377/month minimum, even when nothing is running.
| Approach | Monthly cost |
|---|---|
| GitHub Actions (free tier) | $0 |
| Cloud Scheduler + Cloud Run Job | $0-3 |
| Cloud Workflows + Cloud Run | $0-5 |
| Dagster+ Solo | $10 |
| Dagster+ Starter | $100 |
| Cloud Composer 3 (small) | $377-400+ |
| dbt Cloud Starter (5 seats) | $500+ |
Match the tool to the problem. A solo practitioner running 15 dbt models on a daily schedule has fundamentally different needs than a team of eight managing cross-system dependencies with SLA commitments.
## Cloud Scheduler + Cloud Run: production dbt for $0/month
A straightforward approach on GCP is Cloud Scheduler triggering a Cloud Run Job. No HTTP server, no always-on infrastructure. The job runs your dbt commands and exits.
The setup has four parts: containerize your dbt project, deploy it as a Cloud Run Job, schedule it with Cloud Scheduler, and route credentials through Secret Manager.
Start with a Dockerfile:
```dockerfile
FROM ghcr.io/dbt-labs/dbt-bigquery:1.9.0

COPY . /usr/app
WORKDIR /usr/app
RUN dbt deps

ENTRYPOINT ["dbt"]
CMD ["build", "--target", "prod"]
```

Build and deploy:
```shell
# Build the container
gcloud builds submit --tag gcr.io/YOUR_PROJECT/dbt-runner

# Create the Cloud Run Job
gcloud run jobs create dbt-daily \
  --image gcr.io/YOUR_PROJECT/dbt-runner \
  --region europe-west1 \
  --memory 1Gi \
  --cpu 1 \
  --task-timeout 3600 \
  --set-secrets "GOOGLE_APPLICATION_CREDENTIALS=dbt-sa-key:latest" \
  --service-account dbt-runner@YOUR_PROJECT.iam.gserviceaccount.com

# Schedule it
gcloud scheduler jobs create http dbt-morning-run \
  --schedule "0 7 * * *" \
  --time-zone "Europe/Paris" \
  --uri "https://europe-west1-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/YOUR_PROJECT/jobs/dbt-daily:run" \
  --http-method POST \
  --oauth-service-account-email dbt-scheduler@YOUR_PROJECT.iam.gserviceaccount.com
```

Cloud Run Jobs support timeouts of up to 24 hours, so even large dbt projects won’t hit a wall. The free tier alone covers 180,000 vCPU-seconds per month, which is roughly 50 hours of compute.
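Before trusting the schedule, it’s worth creating the secret and triggering the job once by hand. A sketch of the verification steps, where the secret name matches the `--set-secrets` flag above and the key file name is an assumption:

```shell
# Store the service-account key that the job reads via --set-secrets
gcloud secrets create dbt-sa-key --data-file=sa-key.json

# Execute the job manually and wait for it to finish
gcloud run jobs execute dbt-daily --region europe-west1 --wait

# Force-run the scheduler trigger itself to confirm the IAM wiring
gcloud scheduler jobs run dbt-morning-run
```

If the manual execution succeeds but the scheduler run fails, the problem is almost always the scheduler service account’s permission to invoke the job, not the job itself.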
For a detailed walkthrough of the deployment process, see my guide to deploying dbt on Cloud Run.
## GitHub Actions: if you’re already there
For teams on GitHub, a scheduled workflow is the fastest path from zero to production scheduling. No GCP infrastructure to configure, no containers to build.
```yaml
name: dbt daily run

on:
  schedule:
    - cron: '0 6 * * *'
  workflow_dispatch:

jobs:
  dbt-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install dbt-bigquery
      - run: dbt deps && dbt build --target prod
        env:
          DBT_PROFILES_DIR: .
          GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.GCP_SA_KEY }}
```

Private repos get 2,000 free minutes per month, and a typical dbt run takes 3-10 minutes including dependency installation. That’s plenty for daily or even hourly scheduling.
The tradeoffs are worth knowing. GitHub doesn’t guarantee exact cron timing during periods of high load, so a job scheduled for 6am might start at 6:03 or 6:15. There’s no built-in retry logic, no dependency awareness between workflows, and each run cold-starts by installing Python and dbt from scratch. For a daily analytical refresh where a 15-minute window is acceptable, none of these are dealbreakers.
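The missing retry logic is easy to paper over yourself. A minimal sketch of a generic retry wrapper you could drop into a workflow’s `run:` step; the function name and backoff are my own, not a GitHub Actions feature:

```shell
# Hedged sketch: retry a command up to N times, since GitHub Actions
# has no built-in step-level retries.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0                       # command succeeded, stop retrying
    echo "attempt $i of $attempts failed" >&2
    i=$((i + 1))
    sleep 1                                # use a longer backoff (e.g. 60s) in CI
  done
  return 1                                 # all attempts exhausted
}

retry 3 true && echo "succeeded"
```

In the workflow itself the invocation would be something like `retry 3 dbt build --target prod`, so a transient BigQuery error doesn’t fail the whole daily run.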
## Cloud Workflows for multi-step coordination
When your pipeline involves more than dbt build, Cloud Workflows adds a lightweight orchestration layer. Cloud Scheduler triggers Cloud Workflows, which coordinates multiple Cloud Run Jobs with error handling and conditional logic.
This is useful when you need to run an ingestion step before transformation, validate data between steps, or pass parameters dynamically. GetInData built dbt-workflows-factory, a Python library that generates Cloud Workflow YAML directly from dbt manifests.
Pricing stays low: 5,000 internal steps per month are free, then $0.01 per 1,000 steps. Combined with Cloud Run, you’re still under $5/month for most setups.
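To make this concrete, here is a minimal sketch of a workflow definition that runs an ingestion job and then the dbt job in sequence, using the Cloud Run Jobs connector. The job names are illustrative assumptions, and you should check the connector syntax against Google’s Workflows documentation:

```yaml
# Hedged sketch: two Cloud Run Jobs coordinated by Cloud Workflows.
main:
  steps:
    - run_ingestion:
        call: googleapis.run.v1.namespaces.jobs.run
        args:
          name: namespaces/YOUR_PROJECT/jobs/ingest-daily   # assumed job name
          location: europe-west1
    - run_dbt:
        call: googleapis.run.v1.namespaces.jobs.run
        args:
          name: namespaces/YOUR_PROJECT/jobs/dbt-daily
          location: europe-west1
```

The connector blocks until each job finishes, so `run_dbt` only starts after ingestion completes, and a failed step fails the workflow unless you wrap it in a `try`/`except` step.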
For a comparison between this approach and Cloud Composer, my Cloud Run vs Composer analysis covers the decision in more depth.
## What about dbt Cloud?
dbt Cloud’s built-in scheduler handles cron scheduling, source freshness checks, CI builds on pull requests, and merge-trigger jobs. If you’re already paying for dbt Cloud, the scheduling is included and requires zero infrastructure work.
The question is whether you’re already paying. The Developer tier is free (1 seat, 3,000 model builds per month), which covers solo practitioners. Starter pricing jumps to $100/user/month with a 5-seat maximum. For a three-person consulting team, that’s $300/month for scheduling that Cloud Run handles for free.
dbt Cloud makes sense when the team lives entirely in SQL, never needs to orchestrate anything outside of dbt (no ingestion, no Python processing, no API calls), and values the managed CI/CD experience. The moment you need cross-system dependencies or Python transformations in the same pipeline, you’ll outgrow it. I’ve written more about this tradeoff in my dbt Core vs dbt Cloud comparison.
## Signs it’s time to upgrade
Simple orchestration doesn’t break catastrophically. Instead, trust erodes through small misalignments. Finance gets technically correct but business-incorrect numbers because upstream data wasn’t ready. Pipelines complete “successfully” but deliver insights too late. Monitoring shows green while stakeholders see stale dashboards.
By the time the band-aids fail, migration is urgent rather than planned.
Watch for these concrete triggers:

- More than three interconnected pipelines that depend on each other. If pipeline B must wait for pipeline A, and pipeline C needs both, you’ve outgrown sequential cron jobs.
- Silent failures becoming frequent. Your `dbt build` succeeded, but the source data it processed was from yesterday because the upstream sync hadn’t completed. Nothing in your logs shows a problem.
- Cross-system dependencies. dbt needs to wait for a Fivetran or Airbyte sync, a Python model needs to run after dbt completes, or a BI dashboard refresh should trigger only when fresh data lands.
- Team growth beyond three contributors. Multiple people scheduling cron jobs in different places creates confusion about what runs when and who owns what.
- SLA commitments from stakeholders. When someone says “the board report must use data from the last 6 hours,” you need freshness monitoring, not just “did the job run.”
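Short of migrating, you can make one class of silent failure loud today. A hedged sketch, assuming your dbt sources declare `freshness` thresholds in their YAML config:

```shell
# Abort before building on stale data. dbt source freshness exits
# non-zero when a source breaches its configured error threshold.
dbt source freshness --target prod || { echo "sources are stale, skipping build" >&2; exit 1; }
dbt build --target prod
```

This turns “the build succeeded on yesterday’s data” into a visible failure, which buys time before a full orchestrator becomes necessary.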
The typical progression follows a predictable arc: (1) a single cron job running dbt build; (2) a second script and maybe some Slack alerts; (3) inter-pipeline dependencies start to grow; (4) the Slack alerts and band-aids multiply; (5) the band-aids fail and you migrate to a real orchestrator under pressure.

Recognize stage 3 and plan the migration at stage 4, before stage 5 forces it.
## Choosing your next step
The jump from cron to a full orchestrator doesn’t have to be a cliff. Two options bridge the gap.
Dagster+ Solo at $10/month gives you asset-aware scheduling, a visual DAG, and freshness monitoring for 7,500 materializations per month. For a small dbt project, that’s months of daily runs. It’s the natural next step when you need visibility into data freshness rather than just job execution. The Dagster fundamentals guide covers what analytics engineers need to know.
Prefect Cloud’s free tier offers a Python-first approach with 500 serverless compute minutes. If your team prefers writing plain Python functions over learning a new framework, it’s worth evaluating.
Both options let you start small and scale up, which is the right approach for teams that aren’t yet sure how complex their orchestration needs will become. For a detailed comparison of when to choose Dagster, Airflow, or Prefect, see my decision framework for orchestration tools.
## Start where you are
A daily dbt build running on Cloud Run with a Cloud Scheduler cron trigger is production-grade infrastructure. It costs almost nothing, takes an afternoon to set up, and handles the needs of most small teams for months or years. For a broader view of how these pieces fit together, see my guide to GCP data platform architecture, which covers the full stack.
When the cron job stops being enough, you’ll know. The triggers above will start showing up in your daily work. That’s the right time to evaluate orchestrators, not before.