If you’re running dbt on BigQuery, slot management requires extra consideration. dbt’s execution pattern — many sequential SQL statements — interacts with slots differently than a single complex query or a dashboard workload. Understanding these interactions is what separates a well-tuned dbt deployment from one that’s either overspending on capacity or suffering from chronic slowdowns.
Why dbt Is Compute-Heavy
A typical dbt run might execute hundreds of models. Each model is a separate BigQuery job. Each job:
- Requests slots from the reservation
- Potentially triggers autoscaling
- Holds autoscaled slots for the 60-second minimum window
- Completes, releases slots, then the next model starts
With sequential execution (one model at a time), only one job runs at any moment, but each job can still trigger a fresh autoscale window. With parallel execution (threads > 1), multiple jobs compete for your reservation's capacity through fair scheduling.
Heavy dbt runs — especially full refreshes of large models — can consume significant slot capacity. A full refresh of a multi-terabyte fact table might need 500+ slots for 30 minutes. If that runs alongside your regular incremental models, you need enough capacity for both.
The 60-second autoscale window is particularly relevant for dbt. Consider a run with 200 models, each taking 5 seconds. With autoscaling, each model that triggers a scale-up keeps those slots allocated for 60 seconds. Your actual billed slot-time can be 12x your query execution time. This is why the autoscaling 1.5x cost multiplier exists — dbt’s many-small-queries pattern is the exact workload shape that suffers from the 60-second minimum.
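A back-of-envelope sketch of that 12x figure (this assumes the worst case, where every model triggers its own 60-second minimum scale-up window; real runs that reuse a warm window bill less):

```python
# Worst-case billed autoscale time for a dbt run of many short models,
# assuming each model triggers its own 60-second minimum billing window.
MIN_WINDOW_S = 60  # autoscaling minimum billing window

def billed_vs_executed(num_models: int, seconds_per_model: float) -> float:
    """Ratio of billed slot-seconds to executed slot-seconds."""
    executed = num_models * seconds_per_model
    billed = num_models * max(seconds_per_model, MIN_WINDOW_S)
    return billed / executed

# 200 models at 5 seconds each: billed time is 12x execution time.
print(billed_vs_executed(200, 5))   # → 12.0
# Models longer than 60 seconds are unaffected by the minimum window.
print(billed_vs_executed(10, 120))  # → 1.0
```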
The Current dbt Limitation
As of now, dbt-bigquery uses a single project for both:
- Running queries (determines which reservation’s slots are used)
- Storing output tables
There’s no native way to say “run this model using reservation X.” GitHub issues #2918, #3708, and #1228 track requests for runtime reservation selection, but it’s not implemented yet.
This means your slot allocation is determined entirely by which GCP project dbt uses, and that project is also where your tables land. You can’t (yet) separate “where I want compute” from “where I want data.”
Workaround: Separate Projects by Workload
The practical solution is multiple GCP projects, each assigned to an appropriate reservation. This aligns with the multi-project pattern that’s already best practice for environment isolation.
```yaml
jaffle_shop:
  target: prod
  outputs:
    # High-priority production runs
    prod:
      type: bigquery
      method: service-account
      project: mycompany-dbt-prod   # Assigned to 'prod' reservation (500 slots)
      dataset: analytics
      threads: 8

    # Batch/backfill jobs
    batch:
      type: bigquery
      method: service-account
      project: mycompany-dbt-batch  # Assigned to 'batch' reservation (200 slots)
      dataset: analytics
      threads: 4

    # Development
    dev:
      type: bigquery
      method: service-account
      project: mycompany-dbt-dev    # On-demand pricing
      dataset: dev_adrienne
      threads: 4
```

Then run with the appropriate target:

```shell
# Production incremental run
dbt run --target prod --select state:modified+

# Full refresh backfill
dbt run --target batch --full-refresh

# Development iteration
dbt run --target dev --select my_model
```

This gives you three distinct capacity profiles. Production gets guaranteed slots for daily incremental runs. Batch gets a moderate reservation for heavy-but-not-time-critical work. Development stays on on-demand, where you pay per byte scanned and don't waste reserved capacity on sporadic commands.
Best Practices for dbt + Slots
1. Use incremental models aggressively
Incremental models process only new/changed data, dramatically reducing slot consumption compared to full refreshes. A model that scans 2 TB on full refresh might scan 20 GB incrementally. That’s 100x less slot-time.
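The savings can be sketched with simple arithmetic (this assumes slot consumption scales roughly linearly with bytes scanned, which holds for simple scan-heavy models; joins and aggregations vary):

```python
# Rough slot-time savings from incremental processing, assuming slot
# consumption scales roughly linearly with bytes scanned.
FULL_REFRESH_GB = 2000   # full-table scan: 2 TB
INCREMENTAL_GB = 20      # new/changed partitions only: 20 GB

savings_factor = FULL_REFRESH_GB / INCREMENTAL_GB
print(savings_factor)  # → 100.0
```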
2. Right-size your threads
More threads means more concurrent jobs and higher slot demand. Match your threads setting to your reservation’s capacity. If you have 200 slots and run 16 threads of complex models, you’ll hit contention. Fair scheduling divides your project’s slot allocation equally among concurrent jobs — 200 slots / 16 threads = 12.5 slots per model. That’s not enough for complex transformations.
Start with threads: 4 and increase gradually while monitoring slot utilization. The sweet spot depends on model complexity and reservation size.
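The fair-scheduling arithmetic above can be sketched as follows (the 200-slot reservation and thread counts are the example figures from this section):

```python
# Fair scheduling divides a project's slot allocation equally among
# concurrent jobs, so each busy dbt thread gets reservation / threads.
def slots_per_thread(reservation_slots: int, threads: int) -> float:
    """Slots available to each model when all threads run concurrently."""
    return reservation_slots / threads

for threads in (4, 8, 16):
    print(f"{threads} threads → {slots_per_thread(200, threads)} slots per model")
# 16 threads on a 200-slot reservation leaves only 12.5 slots per model —
# too few for complex transformations.
```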
3. Consider on-demand for dev
Development work is unpredictable. On-demand pricing means you don’t waste reserved capacity on sporadic dbt run commands. A developer might run 5 models in the morning, nothing for 3 hours, then a full refresh of a staging model. On-demand handles this without any capacity sitting idle.
4. Set realistic baselines for production
If your production dbt runs at 6 AM daily and consistently uses 300 slots for 45 minutes, set baseline at 300. You’ll get guaranteed capacity when you need it. Let autoscaling handle the occasional spike above 300.
Analyze your slot usage over 30 days before setting baseline. The P50 (median) usage is a good starting point for baseline, with autoscaling headroom above that.
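A minimal sketch of picking a baseline from historical usage (the hourly slot averages below are made-up illustration; in practice you would pull per-interval averages from INFORMATION_SCHEMA over your 30-day window):

```python
# Choose a baseline from historical slot usage: take the median (P50) of
# per-interval slot consumption and let autoscaling cover the spikes.
import statistics

# Hypothetical hourly average slot usage samples (illustrative only).
hourly_avg_slots = [120, 150, 300, 310, 280, 90, 60, 305, 295, 140]

baseline = statistics.median(hourly_avg_slots)  # P50 as baseline candidate
peak = max(hourly_avg_slots)                    # autoscaling must reach this
print(f"baseline={baseline}, autoscale headroom up to {peak}")
```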
5. Monitor slot usage by model
When job labels are enabled in your dbt configuration, dbt attaches labels such as the invocation ID to every BigQuery job it runs. Query INFORMATION_SCHEMA to find your most expensive models:
```sql
SELECT
  (SELECT value FROM UNNEST(labels) WHERE key = 'dbt_invocation_id') AS invocation,
  (SELECT value FROM UNNEST(labels) WHERE key = 'dbt_model') AS model,
  COUNT(*) AS job_count,
  SUM(total_slot_ms) / 1000 / 60 AS total_slot_minutes,
  AVG(SAFE_DIVIDE(total_slot_ms,
      TIMESTAMP_DIFF(end_time, start_time, MILLISECOND))) AS avg_slots
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND (SELECT value FROM UNNEST(labels) WHERE key = 'dbt_invocation_id') IS NOT NULL
GROUP BY invocation, model
ORDER BY total_slot_minutes DESC
LIMIT 20;
```

This surfaces the models that consume the most slot-time, making them prime candidates for optimization. Applying partition pruning and proven SQL patterns to these models often delivers the biggest improvements.
6. Schedule heavy work off-peak
If your BI reservation shares idle capacity with your production reservation via idle slot sharing, schedule full refreshes and heavy backfills during off-peak BI hours. Your dbt batch jobs borrow idle BI slots, and your BI queries borrow idle dbt slots. Complementary schedules maximize the value of idle sharing.