Partitioning and clustering are BigQuery’s two primary cost levers. Applied correctly, they can reduce query costs by 90–99%. This hub covers mechanics, decision framework, configuration patterns, and anti-patterns across five notes.
Prerequisites
This hub assumes familiarity with BigQuery's separation of storage and compute, and with its on-demand cost model: queries are billed per byte scanned.
Reading Order
1. BigQuery Partitioning Mechanics — how partitioning physically divides tables, the three partitioning types (time-unit, ingestion-time, integer range), the 10,000-partition limit, and when partition pruning works and when it doesn't.
2. BigQuery Clustering Mechanics — how clustering sorts data within storage blocks, why column order matters, automatic re-clustering at no cost, and the four-column limit.
3. BigQuery Partitioning vs Clustering Decision Framework — when to use partitioning alone, clustering alone, or both. Includes size thresholds (64MB, 1GB, 10GB, 100GB), the small partitions trap, validation queries, and dbt-specific gotchas.
4. BigQuery Partitioning Configuration Patterns — domain-specific dbt configurations for event data (GA4/clickstream), marketing/advertising, multi-tenant SaaS, and time-series/IoT workloads with rationale for column choices.
5. BigQuery Partition Pruning Patterns — anti-patterns that silently defeat pruning (functions on partition columns, subqueries in filters, over-partitioning, wrong clustering column order), enforcing partition filters, and measuring pruning effectiveness with INFORMATION_SCHEMA.
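The core mechanics from notes 1 and 2 fit in a single DDL statement. This is a minimal sketch with hypothetical table and column names, not a recommended schema:

```sql
-- Hypothetical events table: daily time-unit partitioning on event_date,
-- clustered on up to four columns. Column order matters: queries benefit
-- most when filters constrain the leading clustering columns.
CREATE TABLE analytics.events
(
  event_date  DATE,
  customer_id STRING,
  event_name  STRING,
  payload     JSON
)
PARTITION BY event_date
CLUSTER BY customer_id, event_name
OPTIONS (
  require_partition_filter = TRUE  -- reject queries that would scan every partition
);
```

Setting `require_partition_filter` at creation time is one way to enforce the partition filters discussed in note 5.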
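The central anti-pattern in note 5, applying a function to the partition column, is easiest to see side by side. A sketch, assuming a table partitioned daily on a DATE column named `event_date`:

```sql
-- Anti-pattern: wrapping the partition column in a function can defeat
-- pruning, forcing a scan of every partition to evaluate the expression.
SELECT COUNT(*)
FROM analytics.events
WHERE FORMAT_DATE('%Y-%m', event_date) = '2024-06';

-- Rewritten: a direct range predicate on the bare partition column lets
-- BigQuery prune to only the matching partitions.
SELECT COUNT(*)
FROM analytics.events
WHERE event_date >= '2024-06-01'
  AND event_date <  '2024-07-01';
```

Comparing bytes processed for the two forms (via a dry run or `INFORMATION_SCHEMA.JOBS`) is a quick way to measure pruning effectiveness.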
Related Topics
For how partitioning and clustering interact with dbt incremental models, see Incremental Strategy Decision Framework and dbt Incremental Strategy Configuration Patterns. The insert_overwrite strategy is particularly relevant: it requires a partitioned table and is typically the most cost-efficient incremental approach on BigQuery.
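As a point of connection, an insert_overwrite incremental model in dbt-bigquery looks roughly like this. Model, source, and column names are hypothetical; the 3-day lookback is an arbitrary illustration:

```sql
-- dbt model sketch: partition-aligned incremental load.
{{
  config(
    materialized = 'incremental',
    incremental_strategy = 'insert_overwrite',
    partition_by = {
      'field': 'event_date',
      'data_type': 'date',
      'granularity': 'day'
    },
    cluster_by = ['customer_id']
  )
}}

SELECT
  event_date,
  customer_id,
  event_name
FROM {{ source('raw', 'events') }}
{% if is_incremental() %}
  -- Only recompute recent partitions; insert_overwrite replaces each
  -- selected partition wholesale rather than merging row by row.
  WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 3 DAY)
{% endif %}
```

Because whole partitions are replaced, the strategy only works on partitioned tables, which is why it appears throughout the notes above.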