The BigQuery cost model offers two ways to pay for compute: on-demand (pay per byte scanned) and Editions (pay per slot-hour). The break-even point depends on monthly scan volume, workload shape, and whether features specific to certain editions are required.
The Break-Even Calculation
BigQuery Editions pricing (introduced July 2023, replacing legacy flat-rate) charges for compute time in slot-hours rather than bytes scanned. Three tiers exist:
| Edition | Cost per Slot-Hour | Best For |
|---|---|---|
| Standard | $0.04 | Dev/test, ad-hoc analysis |
| Enterprise | $0.06 | Production, ML workloads |
| Enterprise Plus | $0.10 | Regulated industries, DR |
100 slots running continuously on Standard Edition PAYG costs $2,920 monthly (100 slots × 730 hours × $0.04). At $6.25 per TiB on-demand, that budget buys roughly 467 TiB of scanning. The rule of thumb: if you process more than 400-500 TiB monthly with consistent patterns, slots likely save money.
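The break-even arithmetic above can be sketched as a pair of helper functions, using the prices quoted in the text ($0.04/slot-hour Standard Edition, $6.25/TiB on-demand, ~730 hours per month); function names are illustrative:

```python
# Break-even between on-demand and slot pricing.
# Figures from the text: $0.04/slot-hour (Standard Edition PAYG),
# $6.25/TiB on-demand, ~730 hours in a month.
SLOT_HOUR_PRICE = 0.04    # USD per slot-hour, Standard Edition
ON_DEMAND_PER_TIB = 6.25  # USD per TiB scanned
HOURS_PER_MONTH = 730

def monthly_slot_cost(slots: int) -> float:
    """Cost of running `slots` continuously for one month."""
    return slots * HOURS_PER_MONTH * SLOT_HOUR_PRICE

def break_even_tib(slots: int) -> float:
    """TiB of on-demand scanning that costs the same as the slots."""
    return monthly_slot_cost(slots) / ON_DEMAND_PER_TIB

print(monthly_slot_cost(100))          # 2920.0
print(round(break_even_tib(100), 1))   # 467.2
```

Anything scanned beyond that break-even volume on a steady schedule is money left on the table under on-demand pricing.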
The calculation shifts for burst workloads. Processing 20 TiB in under an hour:
- On-demand: $125 (20 TiB × $6.25/TiB)
- 100 slots for 1 hour (Standard): $4 (100 slot-hours × $0.04)
For burst scenarios, slots are cost-effective even at lower monthly volumes if workloads concentrate into windows. A team processing 100 TiB monthly in a 4-hour dbt run window may save significantly compared to on-demand, even though 100 TiB/month is below the 400-500 TiB threshold for continuous workloads.
Autoscaling: The Modern Approach
Autoscaling became generally available in February 2025. Instead of committing to a fixed slot count, you configure a range and BigQuery scales within it.
Key configuration decisions:
Baseline slots: Set to zero for workloads with idle periods. Slots scale from zero instantly with no warmup penalty. You only pay for what you use. This eliminates the old problem of paying for idle slots during off-hours.
Maximum slots: Your ceiling during peak demand. Start conservative and increase based on observed wait times. If queries are queuing, raise the max.
Target utilization: Aim for 60-80% during peak hours. Consistently higher indicates under-provisioning (queries queue and slow down). Consistently lower suggests over-provisioning (paying for unused capacity).
Creating a reservation with autoscaling:
```sql
-- Create a reservation with autoscaling
CREATE RESERVATION `project.region-us.prod_reservation`
OPTIONS (
  slot_capacity = 0,          -- baseline
  edition = 'ENTERPRISE',
  autoscale_max_slots = 500
);
```
```sql
-- Create an assignment for your project
CREATE ASSIGNMENT `project.region-us.prod_reservation.prod_assignment`
OPTIONS (
  assignee = 'projects/my-project',
  job_type = 'QUERY'
);
```

The zero-baseline pattern is suited for batch workloads like dbt runs. The reservation sits at zero slots (zero cost) until dbt starts, scales up to handle the pipeline, then scales back to zero. Cost accrues only during the burst window.
Committed Use Discounts (2025)
Google Cloud Next 2025 introduced spend-based committed use discounts for BigQuery:
- 1-year commitment: 10% off pay-as-you-go rates
- 3-year commitment: 20% off pay-as-you-go rates
Unlike traditional slot-based commitments, these are dollar-denominated and flexible across BigQuery Editions, Cloud Composer 3, and Data Governance products. This simplifies capacity planning: commit to a monthly spend level rather than guessing slot counts.
A common pattern: combine commitment for predictable baseline capacity with autoscaling for bursts. This captures commitment discounts on regular workloads (daily dbt jobs, recurring dashboard refreshes) while maintaining flexibility for spikes (ad-hoc analysis, month-end reporting surges).
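A blended monthly cost under this pattern can be estimated roughly as follows; the discount percentages come from the text, but the split between committed baseline spend and burst spend is an assumption you would read off your own billing data:

```python
# Blended cost: committed baseline spend gets the discount, burst
# spend rides PAYG autoscaling at full rate. Dollar figures below
# are illustrative, not from the text.
def blended_cost(baseline_spend: float, burst_spend: float,
                 discount: float) -> float:
    """Monthly cost with a spend commitment covering the baseline."""
    return baseline_spend * (1 - discount) + burst_spend

# $3,000 predictable baseline on a 1-year (10%) commitment,
# plus $800 of autoscaled burst capacity:
print(round(blended_cost(3000, 800, 0.10), 2))  # 3500.0
```

The commitment only pays off on spend you actually incur every month, which is why the burst portion is better left on PAYG.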
Edition Feature Comparison
Beyond price, editions differ in capabilities. Choosing a lower tier to save money may eliminate features required for certain workloads.
Standard Edition:
- Maximum 1,600 slots
- No BigQuery ML
- No materialized views with automatic refresh
- Best for: Development environments, ad-hoc analysis, cost-sensitive workloads
Enterprise Edition:
- Unlimited slots (with appropriate quota)
- BigQuery ML included
- BI Engine acceleration
- Full-text search
- 99.99% SLA
- Best for: Production workloads, ML pipelines, BI-heavy environments
Enterprise Plus:
- Everything in Enterprise
- Cross-region disaster recovery
- Compliance certifications (FedRAMP, CJIS, IL4)
- Best for: Regulated industries, critical workloads requiring DR
Important nuance: On-demand pricing maintains feature parity with Enterprise Plus (except continuous queries and managed disaster recovery). If you need compliance features but have unpredictable usage, on-demand may be more cost-effective than Enterprise Plus slots. Don’t choose Enterprise Plus solely for features when on-demand gives you those same features without capacity commitment.
Decision Framework
Consider these questions in order:
1. What’s your monthly scan volume? If under 400 TiB/month with evenly distributed workloads, on-demand is probably cheaper. Run the self-assessment query from BigQuery Cost Model against your own INFORMATION_SCHEMA.JOBS data.
2. Are workloads bursty or steady? Burst workloads (everything runs in a 4-hour window) favor slots dramatically. Steady, evenly distributed workloads throughout the day favor on-demand unless volume is very high.
3. Do you need features exclusive to specific editions? BigQuery ML, BI Engine, materialized view auto-refresh, and compliance features vary by edition. Check whether on-demand (which includes nearly every Enterprise Plus feature) covers your needs before committing to an edition that might not.
4. Can you predict baseline spend? If you can reliably predict a minimum monthly spend, committed use discounts layer additional savings. If spend is highly variable, PAYG autoscaling is safer.
5. Do you need workload isolation? Slots enable reservation assignments that isolate teams from each other. On-demand uses a shared slot pool where one team’s heavy workload can slow another’s queries.
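The five questions above can be collapsed into a rough decision sketch. The thresholds mirror the text's rules of thumb, but the function and its cutoffs are illustrative, not official guidance:

```python
# Rough encoding of the decision framework. Thresholds (450 TiB
# steady, 100 TiB bursty) mirror the text's rules of thumb and are
# illustrative only.
def recommend_pricing(monthly_tib: float, bursty: bool,
                      needs_isolation: bool,
                      predictable_spend: bool) -> str:
    """Suggest a starting point: 'on-demand' or Editions slots."""
    if needs_isolation or monthly_tib > 450 or (bursty and monthly_tib > 100):
        base = "editions"
    else:
        return "on-demand"
    if predictable_spend:
        return "editions + committed use discount"
    return base

print(recommend_pricing(50, False, False, False))   # on-demand
print(recommend_pricing(500, False, False, True))   # editions + committed use discount
```

Treat the output as a hypothesis to validate against your own INFORMATION_SCHEMA.JOBS history, not a final answer.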
Monitoring Slot Utilization
Once you move to Editions, monitoring slot utilization becomes essential. Under-provisioning causes queries to queue; over-provisioning means paying for idle slots.
Query your slot utilization pattern:
```sql
SELECT
  TIMESTAMP_TRUNC(period_start, HOUR) AS hour,
  reservation_id,
  AVG(slots_assigned) AS avg_slots_assigned,
  MAX(slots_assigned) AS max_slots_assigned,
  AVG(slots_assigned) / MAX(slots_assigned) AS utilization_ratio
FROM `region-us`.INFORMATION_SCHEMA.RESERVATIONS_TIMELINE
WHERE period_start > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY 1, 2
ORDER BY 1 DESC;
```

If the utilization ratio stays below 0.5, your maximum is too high. If queries frequently queue (visible as pending_units in INFORMATION_SCHEMA.JOBS_TIMELINE), your maximum is too low. Adjust weekly until you find the sweet spot.