Looker Studio’s caching layer sits between the viewer and BigQuery. Understanding how it works changes how you configure date ranges, credential modes, and refresh schedules — decisions that look cosmetic but have significant impact on query volume and cost.
How the Cache Works
Each chart component in Looker Studio has its own independent cache. The cache key is a combination of:
- Dimensions and metrics in the chart
- Date range applied to the chart
- Active filter values
Any change to any of these parameters is a cache miss, which triggers a new BigQuery query. A dashboard with 10 charts where someone changes the campaign filter generates 10 new queries — one per chart — because each chart’s cache key now includes the new filter value.
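Looker Studio doesn’t document its internal key format, but the behavior can be modeled as hashing the chart’s parameters together — the function name and hash scheme below are illustrative, not the real implementation:

```python
import hashlib
import json

def chart_cache_key(dimensions, metrics, date_range, filters):
    """Toy model of a per-chart cache key: any change to any component
    yields a different key, i.e. a cache miss for that chart."""
    payload = json.dumps({
        "dimensions": sorted(dimensions),
        "metrics": sorted(metrics),
        "date_range": date_range,
        "filters": sorted(filters.items()),
    }, default=str)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

base = chart_cache_key(["campaign"], ["clicks"],
                       ("2025-02-26", "2025-03-27"), {"campaign": "all"})
changed = chart_cache_key(["campaign"], ["clicks"],
                          ("2025-02-26", "2025-03-27"), {"campaign": "spring_sale"})
print(base != changed)  # a single filter change invalidates this chart's cache
```

On a 10-chart dashboard, every chart recomputes its key the same way, which is why one filter change produces 10 fresh queries.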
The default data freshness setting for BigQuery connections is 12 hours. This means Looker Studio won’t re-query BigQuery for the same chart with the same parameters within a 12-hour window. You can configure this down to 1 minute (for near-real-time data requirements) or set it longer. The longer the freshness window, the more queries are served from cache rather than BigQuery.
Why “Last 30 Days” Is More Expensive Than “This Month”
This is a subtle but high-impact detail. Dynamic date ranges like “Last 30 days” or “Last 7 days” generate different queries every day because the date boundaries shift. Today “Last 30 days” means February 26 - March 27. Tomorrow it means February 27 - March 28. Different date boundaries = different cache key = cache miss every day.
Fixed period ranges like “This month” or “This quarter” keep the same boundaries until the period changes: on March 27, “This month” resolves to March 1 - March 27 for every viewer, all day. Any viewer who opens the same dashboard today gets a cache hit after the first person loads it (assuming credentials match — more on that below).
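The boundary behavior can be sketched with plain date arithmetic (a toy model; Looker Studio resolves these ranges internally):

```python
from datetime import date, timedelta

def last_30_days(today):
    # Rolling window: both boundaries shift every day -> new cache key daily.
    return (today - timedelta(days=29), today)

def this_month_start(today):
    # Fixed start boundary: stays put until the month rolls over.
    return today.replace(day=1)

print(last_30_days(date(2025, 3, 27)))      # (2025-02-26, 2025-03-27)
print(last_30_days(date(2025, 3, 28)))      # (2025-02-27, 2025-03-28): new key
print(this_month_start(date(2025, 3, 27)))  # 2025-03-01
print(this_month_start(date(2025, 3, 28)))  # 2025-03-01: same start all month
```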
The practical implication: if your dashboard has a default date range of “Last 30 days” and 50 people open it over the course of a business day, the first person gets a cache miss (1 BigQuery query per chart), and everyone else gets a cache hit. But the next morning, the date boundaries shift, the cache is stale, and the first person of the day triggers fresh queries again.
With “This month,” the cache stays warm for the entire month. Anyone who opens the dashboard on March 15 benefits from the cache built by anyone who opened it on March 1.
Use fixed date boundaries wherever the business use case allows it. Where users genuinely need rolling windows, accept the daily cache miss as the cost of the feature.
Owner’s Credentials vs. Viewer’s Credentials
This setting has the most impact on cache behavior and should inform your dashboard architecture.
Owner’s credentials: All viewers access BigQuery through the dashboard owner’s identity. One cache is shared across all viewers — more efficient, fewer total queries, lower BigQuery costs. The first person to open the dashboard with a given filter combination pays the query cost; everyone else shares that cache hit.
Viewer’s credentials: Each viewer’s session creates its own cache, separate from every other viewer’s. If 50 people open the same dashboard, there are up to 50 independent caches, and each viewer’s first load is a cache miss. This generates significantly more BigQuery queries than owner’s credentials.
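The difference in query volume can be put in rough numbers. This is a deliberately simplified upper-bound model — it assumes every viewer uses the same filter and date parameters:

```python
def first_load_queries(charts: int, sessions: int, shared_cache: bool) -> int:
    """Upper bound on BigQuery queries triggered by first loads of one
    filter/date combination. With a shared cache (owner's credentials),
    only one session pays the cold load; otherwise every session does."""
    cold_sessions = 1 if shared_cache else sessions
    return charts * cold_sessions

# 10-chart dashboard, 50 viewers in separate sessions:
print(first_load_queries(10, 50, shared_cache=True))   # owner's credentials: 10
print(first_load_queries(10, 50, shared_cache=False))  # viewer's credentials: up to 500
```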
The trade-off is access control. Owner’s credentials means every viewer sees exactly the data the owner can see, with no row-level filtering by viewer identity. If the dashboard shows data everyone should be able to see (aggregate website metrics, company-wide KPIs), owner’s credentials is the right choice.
Viewer’s credentials is necessary when different viewers should see different data: each sales rep seeing only their accounts, each regional manager seeing only their region. The higher query volume and cost are the price of that per-viewer filtering. In this case, consider whether BigQuery row access policies are a cleaner implementation than credential-based separation.
There’s also an organizational dependency risk with owner’s credentials: if the credential owner leaves the organization, every report using their credentials breaks simultaneously. Looker Studio Pro mitigates this with organizational ownership, but on the free tier, use a dedicated service account as the credential owner rather than a personal account.
Pre-Warming the Cache
The first viewer of a dashboard after data refreshes always gets a cache miss. For dashboards used by executive teams or shared widely at the start of the business day, that first-loader experience matters. Pre-warming solves it.
The pattern: after your scheduled data refresh completes, open the dashboard before the first real user of the day. Trigger the common filter combinations — the default date range, the most-used segment filters. Looker Studio populates the cache for each chart. The next 50 viewers see cached responses.
You can automate this with a headless browser script or a simple curl-equivalent that hits the report URL. The automation doesn’t need to do anything with the result; it just needs to trigger the queries so the cache gets populated. Schedule it to run 5-10 minutes after your BigQuery pipeline finishes.
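A minimal sketch of that automation. The report ID is a placeholder, the filter tokens follow Looker Studio’s report URL parameters feature but the field names here are made up, and whether a plain HTTP fetch is enough to trigger the chart queries depends on your setup — charts query via client-side JavaScript, so a headless browser with authenticated state may be required:

```python
import urllib.parse

# Hypothetical report ID; replace with your own.
REPORT_URL = "https://lookerstudio.google.com/reporting/YOUR_REPORT_ID"

def prewarm_url(params_json: str = "") -> str:
    """Build the URL for one filter combination. The response body is
    irrelevant; the request exists only to make the charts run their
    queries so the cache gets populated."""
    if not params_json:
        return REPORT_URL
    return REPORT_URL + "?params=" + urllib.parse.quote(params_json)

# Warm the default view plus the most-used segment filters (field names
# below are invented for illustration). Run from cron 5-10 minutes after
# the BigQuery pipeline finishes.
for combo in ["", '{"ds0.region":"EMEA"}', '{"ds0.region":"AMER"}']:
    url = prewarm_url(combo)
    print(url)
    # e.g. subprocess.run(["curl", "-s", "-o", "/dev/null", url]), or a
    # Playwright page.goto(url) with authenticated storage state if a
    # bare fetch doesn't execute the chart queries.
```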
A more sophisticated version: identify the 5-10 filter combinations that account for 80% of dashboard usage (check INFORMATION_SCHEMA.JOBS for the query patterns), and pre-warm specifically those combinations.
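A sketch of that usage analysis, assuming the dashboard queries run under a dedicated service account — the email, region qualifier, and lookback window are placeholders:

```python
# Collapse identical chart queries (differing only in whitespace) and rank
# them by run count to find the combinations worth pre-warming.
TOP_PATTERNS_SQL = r"""
SELECT
  REGEXP_REPLACE(query, r'\s+', ' ') AS normalized_query,
  COUNT(*) AS runs,
  SUM(total_bytes_billed) / POW(2, 30) AS gib_billed
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
  AND user_email = 'dashboard-sa@my-project.iam.gserviceaccount.com'  -- placeholder SA
GROUP BY normalized_query
ORDER BY runs DESC
LIMIT 10
"""

# Run with the BigQuery client library (requires google-cloud-bigquery
# and credentials, so it is commented out here):
# from google.cloud import bigquery
# for row in bigquery.Client().query(TOP_PATTERNS_SQL):
#     print(row.runs, round(row.gib_billed, 2), row.normalized_query[:80])
```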
Configuring Cache Duration
Data freshness is configurable per data source in Looker Studio. Access it through the data source settings:
- 12 hours (default): Appropriate for daily-updated data. Balances freshness against query reduction.
- 1 hour: Useful when data updates multiple times daily and viewers expect same-day data.
- 1 minute: Near-real-time, but essentially disables caching for most use cases. Every viewer will trigger queries.
- Custom intervals: Set to align with your pipeline schedule. If your dbt pipeline runs at 6 AM and 6 PM, a 12-hour freshness window ensures viewers always see data from the last completed run.
Set the freshness window to the longest acceptable staleness for each data source. For a KPI dashboard showing yesterday’s sales, 24 hours is fine. For an operational monitoring board, 1 hour or less makes sense. Mismatched settings (a 1-hour freshness window on data that updates once a day) just waste queries checking for fresh data that doesn’t exist yet.
Cache Interaction with BI Engine
Looker Studio’s result cache and BI Engine serve different purposes and work together rather than competing:
- Looker Studio cache: Stores query results. Returns identical results for identical queries without touching BigQuery at all.
- BI Engine: Accelerates queries that reach BigQuery. Processes them from in-memory data rather than storage.
A cache hit in Looker Studio is better than a BI Engine hit, because the cache hit doesn’t run any BigQuery query. But for cache misses — new filter combinations, first loads after refresh, viewer’s credential scenarios — BI Engine ensures the query that does run completes in milliseconds rather than seconds.
The two layers are complementary. Optimize Looker Studio cache behavior (fixed date ranges, appropriate freshness windows, owner’s credentials where possible) to reduce total query volume. Enable BI Engine to minimize the latency and cost of the queries that do reach BigQuery.