Migration from BigQuery on-demand to Editions is technically simple — create a reservation, create an assignment. Getting the configuration right is harder. The same anti-patterns appear consistently and result in significant cost overruns or performance regressions.
Anti-Pattern 1: Over-Provisioning Maximum Slots
The autoscaler optimizes for query speed, not cost. Set max_slots to 1,600 and BigQuery will regularly scale to 1,600 even for queries that would complete reasonably at 400. It uses whatever ceiling you give it to minimize latency.
One engineering team’s experience: they set max_slots = 1600 because peak workloads occasionally needed that much. In practice, BigQuery scaled to 1,600 slots frequently, even though only their P99 queries actually needed that headroom; the autoscaler happily used the full ceiling for routine work.
The fix: Set maximum slots to your P90 historical usage, not P99 or your absolute peak. Use the slot histogram query to understand your distribution:
```sql
WITH hourly_slots AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, HOUR) AS hour,
    SUM(period_slot_ms) / 1000 / 3600 AS slots_consumed
  FROM `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    AND (statement_type != 'SCRIPT' OR statement_type IS NULL)
  GROUP BY hour
)
SELECT
  ROUND(APPROX_QUANTILES(slots_consumed, 100)[OFFSET(90)]) AS p90_slots,
  ROUND(APPROX_QUANTILES(slots_consumed, 100)[OFFSET(99)]) AS p99_slots,
  ROUND(MAX(slots_consumed)) AS max_slots
FROM hourly_slots;
```

Start at P90. The autoscaler handles occasional bursts acceptably; you don’t need permanent headroom for every possible spike. Increase the maximum later if you observe consistent slot contention. Increasing is instant, while reducing a committed slot count requires waiting for renewal.
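When you do need to raise the ceiling, it is a single DDL statement against the reservation. A sketch, assuming an Editions reservation managed via the reservations DDL (the project and reservation names here are placeholders):

```sql
-- Sketch: raise the autoscale ceiling on an existing reservation.
-- `admin-project` and `main-res` are placeholder names.
ALTER RESERVATION `admin-project.region-us.main-res`
SET OPTIONS (
  autoscale_max_slots = 800  -- takes effect immediately; no renewal window
);
```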
Anti-Pattern 2: Ignoring the 60-Second Minimum Billing Window
A 100-slot query completing in 5 seconds consumes 500 slot-seconds but is billed for 100 slot-minutes (6,000 slot-seconds). This is the 60-second autoscaling window: slots remain allocated for a minimum of 60 seconds after a scaling event before BigQuery scales back down.
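The arithmetic for that example, as an illustrative query:

```sql
-- Billed vs. consumed capacity for a hypothetical 100-slot, 5-second query.
SELECT
  100 * 5  AS consumed_slot_seconds,             -- 500 slot-seconds actually used
  100 * 60 AS billed_slot_seconds,               -- 6,000 slot-seconds under the 60s minimum
  (100 * 60) / (100 * 5) AS billing_multiplier;  -- 12x inflation for this query
```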
This makes Editions relatively expensive for workloads consisting of many short queries. If your typical pattern is thousands of sub-minute operations — interactive BI tools, data exploration, API-driven micro-queries — the 60-second minimum dramatically inflates costs compared to theoretical calculations. The cost gap between theory and reality grows as query duration shortens.
The fix: Identify your query duration distribution before migrating:
```sql
SELECT
  EXTRACT(HOUR FROM creation_time) AS hour_of_day,
  COUNT(*) AS query_count,
  ROUND(AVG(TIMESTAMP_DIFF(end_time, start_time, SECOND)), 1) AS avg_duration_sec,
  COUNTIF(TIMESTAMP_DIFF(end_time, start_time, SECOND) < 60) AS queries_under_60sec,
  ROUND(COUNTIF(TIMESTAMP_DIFF(end_time, start_time, SECOND) < 60) / COUNT(*) * 100, 1) AS pct_under_60sec
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
  AND state = 'DONE'
GROUP BY hour_of_day
ORDER BY hour_of_day;
```

If more than 50% of your queries complete in under 60 seconds, apply a 2x or higher multiplier to theoretical slot costs rather than the standard 1.5x. For workloads dominated by sub-minute queries, staying on on-demand or keeping that query population on on-demand via a hybrid approach is usually the right call.
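To turn that distribution into a rough inflation estimate, one possible sketch. It approximates each job's average slot count as `total_slot_ms / duration_ms` and assumes that capacity would be held for at least 60 seconds; treat the result as a ballpark, not a bill:

```sql
-- Rough estimate of how much the 60-second floor inflates slot consumption.
SELECT
  ROUND(SUM(total_slot_ms) / 1000) AS actual_slot_seconds,
  ROUND(SUM(
    -- average slots for the job, held for max(duration, 60s)
    SAFE_DIVIDE(total_slot_ms, TIMESTAMP_DIFF(end_time, start_time, MILLISECOND))
    * GREATEST(TIMESTAMP_DIFF(end_time, start_time, SECOND), 60)
  )) AS floor_adjusted_slot_seconds
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
  AND state = 'DONE';
```

The ratio of the two columns is the multiplier your workload would actually pay under the 60-second minimum.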
Anti-Pattern 3: Migrating Without Optimizing Queries First
Inefficient queries consuming excessive slots don’t improve with Editions. They just consume your reserved capacity faster. A query scanning 10 TiB when it could scan 100 GiB with proper partitioning wastes slots just as it wastes bytes.
With on-demand, each inefficient query shows up as a line item on your bill — you feel the waste directly. With Editions, inefficient queries silently eat into your reservation capacity, slowing other queries through slot contention rather than showing up as explicit costs. The waste is harder to see and easier to ignore.
The fix: Optimize before migrating. Implement partitioning, clustering, and column pruning on your highest-cost tables. The query optimization work pays dividends regardless of pricing model, but it’s especially important with Editions because you’re paying for compute capacity whether it’s used efficiently or not.
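As a sketch of the kind of change involved (table and column names are hypothetical), rebuilding a large events table with partitioning and clustering so date-filtered queries prune most of its data:

```sql
-- Rebuild an unpartitioned table with partitioning and clustering.
-- `analytics.events`, `event_ts`, and `customer_id` are placeholder names.
CREATE TABLE analytics.events_v2
PARTITION BY DATE(event_ts)    -- queries filtering on date scan only matching partitions
CLUSTER BY customer_id         -- queries filtering on customer_id scan fewer blocks
AS
SELECT * FROM analytics.events;
```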
Run this against your INFORMATION_SCHEMA to find the top candidates:
```sql
SELECT
  user_email,
  job_id,
  SUBSTR(query, 1, 200) AS query_preview,
  ROUND(total_bytes_billed / POW(1024, 4) * 6.25, 2) AS estimated_cost_usd,
  ROUND(total_slot_ms / 1000 / 3600, 3) AS slot_hours,
  TIMESTAMP_DIFF(end_time, start_time, SECOND) AS duration_seconds
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
  AND state = 'DONE'
ORDER BY total_bytes_billed DESC
LIMIT 25;
```

Fix the top-cost queries before migrating. The queries that cost the most on on-demand will consume the most slots on Editions — and the cost feedback is more visible on on-demand.
Anti-Pattern 4: Purchasing Commitments Before Testing
One-year and three-year commitments offer 20% and 40% discounts respectively, but they lock you into slot counts that might not match actual needs. A commitment purchased based on estimates often proves wrong in practice — either too large (overpaying for unused capacity) or too small (generating contention you didn’t anticipate).
The discount and the lock-in are both real. A one-year commitment for 500 Enterprise slots at $0.048/slot-hour costs you $210,240 over the year (500 × $0.048 × 8,760 hours). If you discover after month two that you actually needed 300 slots, you are carrying 200 excess slots for the remaining ~7,300 hours: 200 × $0.048 × ~7,300 ≈ $70,000 in overhead you can't escape.
The fix: Test with pay-as-you-go for 30+ days first. Run representative workloads (not just a sample — include your ETL windows, your peak BI hours, your monthly batch jobs) and observe actual slot consumption. Only then purchase commitments based on observed data, not estimates. See BigQuery Editions Testing Without Commitment for the full test-then-commit workflow.
If your analysis shows you need 200 baseline slots with bursts to 600, commit to 150–200 slots and let autoscaling handle the rest. The commitment discount on baseline slots plus pay-as-you-go for bursts often beats committing to peak capacity you rarely use.
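A sketch of that shape using the reservation DDL. All names are placeholders, and the exact semantics of `slot_capacity` (baseline) versus `autoscale_max_slots` (additional autoscaled capacity on top of baseline) should be checked against the current documentation for your edition:

```sql
-- Sketch: 200 baseline slots, autoscaling up to roughly 600 total for bursts.
-- `admin-project` and `main-res` are placeholder names.
CREATE RESERVATION `admin-project.region-us.main-res`
OPTIONS (
  edition = 'ENTERPRISE',
  slot_capacity = 200,        -- baseline; cover this with a commitment for the discount
  autoscale_max_slots = 400   -- up to 400 extra autoscaled slots, billed pay-as-you-go
);
```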
Anti-Pattern 5: Running All Workloads Through One Reservation
ETL jobs, BI dashboards, and ad-hoc exploration have fundamentally different slot consumption characteristics. ETL runs on predictable schedules with sustained, heavy compute. Dashboards need low latency but sporadic capacity. Ad-hoc queries are unpredictable in timing and resource requirements.
When all these workloads share one reservation, they compete for the same slot pool. A heavy ETL job can starve interactive dashboard queries. An analyst running an expensive ad-hoc scan can delay a scheduled pipeline. The reservation becomes a bottleneck rather than a capacity guarantee.
The fix: Separate projects by workload type and apply appropriate pricing models or reservation configurations to each:
- On-demand for ad-hoc queries and development: sporadic patterns favor per-TiB pricing. Projects without a reservation assignment automatically use on-demand.
- Enterprise Edition with committed slots for scheduled ETL: sustained compute with predictable timing is where Editions delivers the most savings.
- A separate reservation with priority weighting for BI dashboards: moderate capacity with autoscaling to handle concurrent user load.
You can also create multiple reservations within a single Editions setup, assigning different maximum slots and priorities to different workload types. This prevents an expensive ETL job from starving interactive queries while still providing the cost benefits of slot-based pricing for your heaviest workloads.
The reservation hierarchy — commitments → reservations → assignments — is designed exactly for this pattern. The assignment layer routes different GCP projects to different reservations without any application-level changes. Your production dbt project uses the ETL reservation’s slots; your Looker project uses the BI reservation. Neither project knows or cares about the routing.
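That routing looks roughly like the following sketch (all project, reservation, and assignment names are placeholders; verify the assignment DDL path syntax against current documentation). Once the assignment exists, query jobs from the assignee project draw on the reservation's slots with no application changes:

```sql
-- Sketch: route the production ETL project to the ETL reservation.
CREATE ASSIGNMENT `admin-project.region-us.etl-res.etl-assignment`
OPTIONS (
  assignee = 'projects/prod-etl-project',  -- the GCP project whose jobs get routed
  job_type = 'QUERY'                       -- applies to query jobs from that project
);
```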