Star schema and one big table (OBT) are not mutually exclusive in a dbt project — both patterns are used at different layers of the three-layer architecture.
Star Schema
A star schema separates your data into fact tables (events, transactions) and dimension tables (entities, attributes). An opportunity fact table references dimension tables for account, owner, stage, and product through foreign keys. Queries join facts to dimensions as needed.
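As a sketch, a star-schema query joins the fact table to its dimensions at query time. Table and column names here are illustrative, not from any specific schema:

```sql
-- Illustrative star-schema query: the fact table joined to two dimensions.
-- All table and column names are hypothetical.
select
    o.opportunity_id,
    o.amount,
    a.account_name,
    a.industry,
    u.owner_name
from fct_opportunity as o
join dim_account as a
    on o.account_id = a.account_id
join dim_user as u
    on o.owner_id = u.user_id
where o.close_date >= '2024-01-01'
```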
Strengths:
- Clear entity separation makes models independently testable and maintainable
- Smaller individual tables are faster to build during dbt runs
- Dimension updates don’t require rebuilding the fact table
- Storage-efficient — no duplication of dimension attributes across every fact row
- Aligns naturally with intermediate models organized by entity
Weaknesses:
- Requires joins at query time, which adds latency for BI tools
- More complex SQL for end users and dashboards
- Join performance depends on how efficiently the warehouse executes joins at query time
One Big Table (OBT)
The OBT pattern pre-joins everything into a single wide, denormalized table. One row per fact record, with all dimension attributes already included. No joins needed at query time.
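The same question against an OBT needs no joins at all. A hedged sketch, reusing the hypothetical names from the star-schema example above:

```sql
-- Equivalent query against a hypothetical pre-joined OBT:
-- all dimension attributes already live on the fact row.
select
    opportunity_id,
    amount,
    account_name,
    industry,
    owner_name
from obt_opportunity
where close_date >= '2024-01-01'
```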
Strengths:
- Faster query performance — no joins, single table scan
- Simpler SQL for BI tools and dashboards
- Better BI tool compatibility — some tools struggle with multi-table joins
- Columnar storage means width doesn’t penalize queries that select few columns
Weaknesses:
- Larger storage footprint due to attribute duplication
- Longer build times during dbt runs (must rebuild when any dimension changes)
- Harder to maintain — changes to a dimension affect the entire OBT rebuild
- Can become unwieldy at hundreds of columns
BigQuery Specifically Favors OBT
BigQuery’s architecture gives OBT a measurable advantage. The combination of columnar storage, massively parallel execution, and automatic query optimization means wide tables perform well.
Fivetran’s benchmarks show OBT outperforming star schema by 25-50% across cloud warehouses, with BigQuery showing the largest gap at 49% faster query times for OBT versus equivalent star schema queries. BigQuery’s parallel execution engine handles wide tables particularly well because it reads only the columns referenced in your query — a 200-column OBT queried for 5 columns scans only those 5 columns.
This doesn’t mean star schema is wrong on BigQuery. It means the join overhead a star schema imposes at query time is more noticeable on BigQuery than on warehouses whose engines are tuned for star-join execution.
Using Both Patterns
A well-structured dbt project uses both patterns at different layers:
Intermediate layer: entity separation. Build your intermediate models with clear entity boundaries. int__opportunity_enriched, int__account_enriched, int__contact_enriched. Each entity model is independently testable, debuggable, and reusable. This is your star schema — clean, normalized, serving as the single source of truth for each entity.
Mart layer: wide denormalized tables. Build your mart models as pre-joined wide tables for specific BI use cases. These are your OBTs — everything a dashboard needs in one table, no joins required. Each mart serves a specific consumer at a specific grain.
This is exactly what the Fivetran packages do. The salesforce__opportunity_enhanced model pre-joins opportunity data with account, owner, and stage information into a single wide table ready for BI consumption. Underneath it, the source and staging models maintain clean entity separation.
[base models: 1-to-1 with source tables]
        ↓
[intermediate: entity-separated, normalized]
        ↓
[marts: wide, denormalized, consumer-specific]

The intermediate layer provides maintainability and reuse. The mart layer provides query performance and simpler SQL for consumers.
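A mart model implementing this layering might look like the following dbt sketch. The model name and column list are assumptions for illustration; the intermediate model names are the ones used earlier in this article, not the actual Fivetran package code:

```sql
-- models/marts/mrt__sales__opportunity_wide.sql (hypothetical dbt model)
-- Pre-joins entity-separated intermediate models into one wide mart table.
with opportunities as (
    select * from {{ ref('int__opportunity_enriched') }}
),

accounts as (
    select * from {{ ref('int__account_enriched') }}
)

select
    o.*,
    a.account_name,
    a.industry,
    a.annual_revenue
from opportunities as o
left join accounts as a
    on o.account_id = a.account_id
```

The joins happen once, at build time, so every dashboard query downstream is a single-table scan.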
When to Use Pure Star Schema
There are legitimate cases for exposing star schema directly to consumers:
- Semantic layer tools (Cube, MetricFlow, Looker LookML) that handle joins optimally and prefer normalized inputs. These tools build their own virtual OBTs from your star schema, so pre-joining in dbt is redundant.
- Exploration-heavy environments where analysts write their own SQL and need flexibility to join entities in ways you didn’t anticipate. A star schema with clean dimensions is more composable than a rigid OBT.
- Very wide fact tables where pre-joining every dimension would create tables with 500+ columns. At some point, the OBT becomes unwieldy even for columnar storage.
When to Use Pure OBT
Pure OBT shines when:
- Dashboard performance is the priority and your BI tool doesn’t handle joins well (many self-service BI tools fall into this category)
- The consumer is non-technical and can’t write or debug joins
- Reverse ETL destinations expect flat, denormalized inputs
- The entity count is small — if you’re joining 3-4 dimensions, the OBT is manageable and the performance gain is significant
Column Count and Practical Limits
Wide tables on BigQuery can technically have up to 10,000 columns. In practice, OBTs beyond 200-300 columns start to have ergonomic problems — they’re hard to navigate in query editors, column names become ambiguous, and documentation becomes essential rather than optional.
If your OBT is approaching 300+ columns, consider splitting it into domain-specific OBTs: mrt__sales__opportunity_wide, mrt__support__case_wide. Each serves a specific consumer group with only the columns they need. This preserves the query performance benefit of OBT without the ergonomic downsides of a monolithic table.
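Splitting can be as simple as two mart models selecting different column subsets from the same intermediate layer. A sketch of the sales-facing mart, with an illustrative column list:

```sql
-- models/marts/mrt__sales__opportunity_wide.sql (hypothetical dbt model)
-- Sales-facing OBT: only the columns the sales dashboards actually use.
-- A sibling mrt__support__case_wide would select a different subset
-- from the same intermediate models.
select
    o.opportunity_id,
    o.stage_name,
    o.amount,
    o.close_date,
    a.account_name,
    a.industry
from {{ ref('int__opportunity_enriched') }} as o
left join {{ ref('int__account_enriched') }} as a
    on o.account_id = a.account_id
```

Because both domain marts read from the same intermediate models, the entity logic stays in one place and only the final projection differs.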
Storage Considerations
OBT does increase storage due to dimension attribute duplication. An opportunity OBT that includes account name, industry, and owner name on every row duplicates those strings across every opportunity. On BigQuery, columnar storage with automatic compression mitigates this significantly — repeated string values compress extremely well.
For most teams, the storage cost increase from OBT is negligible compared to the compute cost savings from eliminating runtime joins. Run the numbers for your specific case, but storage is rarely the constraint.