Between “just use Cloud Scheduler” and “deploy Cloud Composer” sits Cloud Workflows: serverless orchestration at $0.01 per 1,000 steps. It fills the gap for teams that need multi-step pipeline coordination without Composer’s $300-400/month fixed cost.
What Cloud Workflows Does
Workflows provides the control flow primitives that Cloud Scheduler lacks:
- Conditional logic. Branch based on the result of a previous step. Run dbt only if extraction succeeded. Send different notifications for success versus failure.
- Parallel execution. Run independent steps concurrently. Extract from three sources simultaneously, then run dbt when all three complete.
- Error handling. Try/catch/retry semantics. Define what happens when a step fails — retry with backoff, skip to a fallback, or halt the workflow.
- Retry policies. Configure per-step retry with exponential backoff and maximum attempts. A sketch of the parallel and retry primitives follows the example below.
You can call Cloud Run Jobs, Cloud Functions, and arbitrary HTTP endpoints directly, and GCP APIs through built-in connectors. The YAML-based syntax expresses dependencies and branching:
```yaml
main:
  steps:
    - run_extraction:
        call: http.post
        args:
          url: https://extraction-job-url/
        result: extraction_result
    - check_extraction:
        switch:
          - condition: ${extraction_result.body.status == "success"}
            next: run_dbt
          - condition: true
            steps:
              - fail_extraction:
                  raise: "Extraction failed"
    - run_dbt:
        call: http.post
        args:
          url: https://dbt-job-url/
        result: dbt_result
    - notify:
        call: http.post
        args:
          url: https://slack-webhook/
          body:
            text: "Pipeline completed"
```

This example shows a common pattern: extract data, validate the result, run dbt transformations, then notify the team. Four steps, conditional branching, and failure handling — all for fractions of a cent per execution.
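The example above is purely sequential. The other primitives from the list, parallel execution and retries, look roughly like the following sketch. The per-source endpoint URLs are hypothetical, and note that in Workflows a retry policy attaches to a try block:

```yaml
main:
  steps:
    - extract_all:
        parallel:
          branches:
            - source_a:
                steps:
                  - call_a:
                      try:
                        call: http.post
                        args:
                          url: https://source-a-job-url/  # hypothetical endpoint
                      # retry transient failures with exponential backoff
                      retry:
                        predicate: ${http.default_retry_predicate}
                        max_retries: 3
                        backoff:
                          initial_delay: 2
                          max_delay: 60
                          multiplier: 2
            - source_b:
                steps:
                  - call_b:
                      call: http.post
                      args:
                        url: https://source-b-job-url/  # hypothetical endpoint
    - run_dbt:
        # runs only after every parallel branch has completed
        try:
          call: http.post
          args:
            url: https://dbt-job-url/
        except:
          as: e
          steps:
            - alert_failure:
                call: http.post
                args:
                  url: https://slack-webhook/
                  body:
                    text: ${"dbt step failed: " + e.message}
```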
The Hybrid Pattern: Workflows Plus Cloud Run Jobs
The most practical architecture for multi-step data pipelines on GCP pairs Cloud Workflows with Cloud Run Jobs:
- Cloud Scheduler triggers the workflow on a cron schedule
- Cloud Workflows orchestrates the sequence of steps
- Cloud Run Jobs execute each heavy-lifting task (extraction, dbt, validation)
- Eventarc optionally triggers additional workflows based on events
Each component does what it’s good at. Scheduler handles timing. Workflows handles sequencing and control flow. Cloud Run Jobs handle compute-intensive execution. The total cost remains negligible compared to Composer — typically under $10/month for daily multi-step pipelines.
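To make the wiring concrete, here is a minimal sketch of the Workflows half driving two Cloud Run Jobs through the Cloud Run Admin API connector. The project, region, and job names are placeholders:

```yaml
main:
  steps:
    - run_extraction_job:
        # the connector waits for the job execution to finish before the
        # workflow advances, so sequencing falls out for free
        call: googleapis.run.v2.projects.locations.jobs.run
        args:
          name: projects/my-project/locations/us-central1/jobs/extraction-job
        result: extraction_execution
    - run_dbt_job:
        call: googleapis.run.v2.projects.locations.jobs.run
        args:
          name: projects/my-project/locations/us-central1/jobs/dbt-job
        result: dbt_execution
```

Cloud Scheduler then needs only a single cron entry that executes this workflow.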
What You Give Up
The trade-offs versus Composer are specific and worth understanding before committing:
No backfill command. Airflow’s backfill lets you rerun pipelines for arbitrary date ranges with dependency resolution. Workflows has no equivalent. If you need to reprocess three months of data after a schema change, you’re writing a script to invoke the workflow with parameterized dates — manageable, but manual.
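A sketch of what that script rests on: the workflow takes the date as a runtime argument (endpoint URL carried over from the earlier example):

```yaml
main:
  params: [args]
  steps:
    - run_extraction:
        call: http.post
        args:
          url: https://extraction-job-url/
          body:
            # each execution processes exactly one date, supplied by the caller
            run_date: ${args.run_date}
        result: extraction_result
```

Each backfill day then becomes one execution, something like `gcloud workflows run pipeline --data='{"run_date": "2025-01-15"}'` in a shell loop (workflow name assumed); the ordering, batching, and failure tracking that Airflow's scheduler would handle are yours to script.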
No operational UI. Workflows has an execution log in the GCP Console, but it’s a list of runs with status and duration. There are no Gantt charts, no dependency graphs, no task-level drill-down. For debugging, you’re parsing Cloud Logging. For operational awareness across a team, you’re building a custom dashboard or relying on Slack notifications.
No operator ecosystem. Airflow has hundreds of operators for interacting with different systems — S3, Snowflake, dbt Cloud, Slack, PagerDuty. Workflows interacts with services through HTTP calls and GCP API connectors. Anything Airflow does through a dedicated operator, Workflows does through a raw HTTP call. This is fine for simple integrations but becomes tedious when you need to interact with many external systems.
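As an illustration, here is roughly what an Airflow dbt Cloud operator collapses to in Workflows. The account and job IDs are hypothetical and the token handling is elided:

```yaml
- trigger_dbt_cloud:
    call: http.post
    args:
      url: https://cloud.getdbt.com/api/v2/accounts/12345/jobs/67890/run/
      headers:
        # placeholder; in practice, fetch the token from Secret Manager
        Authorization: Token dbt-cloud-api-token
      body:
        cause: "Triggered from Cloud Workflows"
    result: dbt_cloud_run
```

Multiply that by every external system you touch, plus the pagination, polling, and error-mapping logic a dedicated operator would otherwise encapsulate.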
Limited state management. Workflows can pass data between steps via variables, but there’s no equivalent to Airflow’s XCom for sharing large or complex data between tasks. Step outputs are limited in size, so you’ll use Cloud Storage or a database for intermediate results rather than passing them through the workflow.
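In practice that means steps exchange pointers rather than payloads. A sketch, assuming an extraction job that writes its output to Cloud Storage and returns the object URI in its response (an application-level contract, not a Workflows feature):

```yaml
- run_extraction:
    call: http.post
    args:
      url: https://extraction-job-url/
    result: extraction_result
- run_dbt:
    call: http.post
    args:
      url: https://dbt-job-url/
      body:
        # the workflow carries only the short URI string; the dbt job
        # reads the actual data from Cloud Storage itself
        input_uri: ${extraction_result.body.output_uri}
```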
When Workflows Is the Right Choice
This middle-ground option suits teams that meet these criteria:
- Need multi-step orchestration. Your pipeline has more than one job that needs to run in sequence with error handling. Cloud Scheduler alone can’t express “run B only if A succeeded.”
- Backfill requirements are minimal. You rarely need to reprocess historical data, or you can handle it manually when it comes up.
- Cost sensitivity rules out Composer. The $300-400/month minimum isn’t justifiable for your workload.
- Pipeline complexity is moderate. Fewer than ten steps. If you’re building DAGs with dozens of tasks and complex dependency trees, Airflow’s model will serve you better.
- You’re comfortable without Airflow’s UI. Your team can debug from logs and doesn’t need visual pipeline monitoring.
For pipelines that sequence two to five Cloud Run Jobs with conditional logic and error handling, Workflows hits a sweet spot: expressive enough to handle real orchestration needs, cheap enough that cost never enters the conversation, and simple enough that the YAML definition is the entire system — no infrastructure to manage, no environment to maintain.
The decision framework positions Workflows relative to the other options. As a heuristic: if Cloud Scheduler isn't enough and Composer is too expensive for the workload, Workflows is the natural middle ground.