The lookback window determines how far back to consider touchpoints before a conversion. It’s the INTERVAL N DAY in the WHERE clause of every attribution query. A window shorter than the actual purchase cycle excludes touchpoints that influenced the conversion; a window longer than necessary credits marketing activity that had no causal relationship to the outcome.
Why the window matters
Every attribution model — first-touch, last-touch, linear, position-based, time-decay — operates within a lookback window. The window defines the universe of touchpoints eligible for credit. A 7-day window on a product with a 45-day consideration phase will miss the awareness touchpoints that started the journey. A 180-day window on an impulse purchase will credit a display ad from six months ago that the customer doesn’t remember seeing.
The lookback window is a statement about the customer’s decision-making timeline. A window misaligned with actual purchase cycles models a journey that doesn’t reflect reality.
Industry benchmarks
Purchase cycles vary dramatically by industry, and the lookback window should match:
| Industry | Recommended Window | Reasoning |
|---|---|---|
| E-commerce (impulse) | 7-14 days | Fast purchase decisions, short consideration |
| E-commerce (considered) | 30-45 days | Research phase before higher-value purchases |
| SaaS (self-serve) | 14-30 days | Trial-to-paid typically completes within a month |
| B2B Mid-market | 90-180 days | Multi-stakeholder decisions, demo-to-close cycles |
| B2B Enterprise | 180-365 days | Long sales cycles with multiple champions and committees |
These are starting points, not rules. Your actual purchase cycle data should drive the final number. Query your conversion data to find the distribution of time between first known touchpoint and conversion:
```sql
WITH first_touches AS (
  SELECT
    user_id,
    transaction_id,
    MIN(touchpoint_timestamp) AS first_touch_at,
    MAX(conversion_timestamp) AS converted_at
  FROM touchpoints
  GROUP BY user_id, transaction_id
)

SELECT
  APPROX_QUANTILES(TIMESTAMP_DIFF(converted_at, first_touch_at, DAY), 100)[OFFSET(50)] AS median_days,
  APPROX_QUANTILES(TIMESTAMP_DIFF(converted_at, first_touch_at, DAY), 100)[OFFSET(90)] AS p90_days,
  APPROX_QUANTILES(TIMESTAMP_DIFF(converted_at, first_touch_at, DAY), 100)[OFFSET(95)] AS p95_days
FROM first_touches
```

Set your lookback window at the P90 or P95 of your first-touch-to-conversion time. This captures the vast majority of legitimate journeys while filtering out noise from ancient touchpoints.
Consequences of wrong windows
Too short
When the lookback window is shorter than your actual purchase cycle, you systematically exclude early-funnel touchpoints. The effects:
- First-touch attribution becomes meaningless. If awareness touchpoints fall outside the window, “first touch” is really “first touch within an arbitrary time boundary.” The channel that actually introduced the customer gets no credit.
- Discovery channels appear weak. Content marketing, organic search, brand campaigns — anything that operates at the top of the funnel gets under-credited because those interactions happened before the window opened.
- Journey lengths look artificially short. Multi-touch models (linear, position-based) see fewer touchpoints per conversion, which concentrates credit on the few touches that survive the window cutoff.
- Unattributed conversions increase. Conversions with no qualifying touchpoints in the window appear as “direct” or “unattributed,” inflating the direct channel’s apparent contribution.
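The last effect is directly measurable. One hedged way to check it is a sketch like the following, which counts the share of conversions with no qualifying touchpoint inside a candidate window. The `touchpoint_events` and `conversion_events` tables and `user_pseudo_id` key match the schema used elsewhere in this piece; the `conversion_id` column is an illustrative assumption:

```sql
-- Share of conversions with zero touchpoints inside a 14-day window.
-- A high share here suggests the window is shorter than the purchase cycle.
SELECT
  COUNTIF(first_qualifying_touch IS NULL) / COUNT(*) AS unattributed_share
FROM (
  SELECT
    c.conversion_id,
    MIN(t.event_timestamp) AS first_qualifying_touch
  FROM conversion_events c
  LEFT JOIN touchpoint_events t
    ON t.user_pseudo_id = c.user_pseudo_id
    AND t.event_timestamp <= c.event_timestamp
    AND t.event_timestamp >= TIMESTAMP_SUB(c.event_timestamp, INTERVAL 14 DAY)
  GROUP BY c.conversion_id
)
```

If widening the window (14 → 30 → 60 days) keeps shrinking this share, the shorter windows were cutting off real journeys rather than noise.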
Too long
When the lookback window is longer than your actual purchase cycle, you include touchpoints that have no causal relationship to the conversion. The effects:
- Old touchpoints dilute credit. A display ad impression from four months ago gets credit alongside last week’s branded search click, even though the customer forgot about the display ad months ago.
- Linear attribution suffers most. With more touchpoints in the window, each one gets a smaller share of credit. The high-impact recent touchpoints get diluted by irrelevant old ones.
- Awareness channels appear disproportionately strong. Upper-funnel channels that generate lots of early impressions accumulate touchpoints over long windows, inflating their attributed revenue relative to their actual influence.
- Computation costs increase. Longer windows mean more touchpoint-conversion joins. In BigQuery, this directly affects bytes scanned and slot usage. A 365-day window on a high-traffic site can produce an order of magnitude more rows than a 30-day window.
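The cost effect can be estimated before committing to a window. This sketch, using the same illustrative `touchpoint_events`/`conversion_events` schema, counts joined rows at several candidate window lengths in one pass; since the query itself scans the widest window, run it on a sampled or date-limited extract:

```sql
-- Joined touchpoint-conversion row counts per candidate lookback window.
-- Each extra row is an extra input to every downstream attribution model.
SELECT
  w AS lookback_days,
  COUNT(*) AS joined_rows
FROM UNNEST([7, 30, 90, 365]) AS w
CROSS JOIN conversion_events c
JOIN touchpoint_events t
  ON t.user_pseudo_id = c.user_pseudo_id
  AND t.event_timestamp <= c.event_timestamp
  AND t.event_timestamp >= TIMESTAMP_SUB(c.event_timestamp, INTERVAL w DAY)
GROUP BY lookback_days
ORDER BY lookback_days
```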
Platform defaults for comparison
When you build attribution in your warehouse, you’re replacing the platform’s built-in window with your own. Knowing what the platforms use helps you understand why your numbers differ from theirs:
| Platform | Default Click Window | Default View Window |
|---|---|---|
| Google Ads | 30 days | None (last-click) |
| Meta Ads | 7 days | 1 day |
| LinkedIn Ads | 30 days | 7 days |
| TikTok Ads | 7 days | 1 day |
The platform windows are designed to maximize the platform’s attributed conversions. Your warehouse window should be designed to reflect your actual customer journey.
Note that Meta’s attribution windows were significantly shortened after iOS 14.5. Pre-ATT, Meta used 28-day click and 7-day view as defaults. The reduction to 7-day click and 1-day view means Meta now claims fewer conversions than it would have historically, which makes cross-platform comparison even more complicated.
Implementation in SQL
The lookback window appears as a filter in the touchpoint table join:
```sql
FROM touchpoint_events t
JOIN conversion_events c
  ON t.user_pseudo_id = c.user_pseudo_id
  AND t.event_timestamp <= c.event_timestamp
  AND t.event_timestamp >= TIMESTAMP_SUB(
    c.event_timestamp,
    INTERVAL 30 DAY  -- your lookback window
  )
```

To make the window configurable in dbt, use a project variable:

```yaml
# dbt_project.yml
vars:
  attribution_lookback_days: 30
```

```sql
AND t.event_timestamp >= TIMESTAMP_SUB(
  c.event_timestamp,
  INTERVAL {{ var('attribution_lookback_days') }} DAY
)
```

This lets you change the window without modifying model SQL. You can also run attribution with multiple windows to see how sensitive your results are to the choice. If rankings shift dramatically between 30-day and 60-day windows, your window selection is doing more analytical work than you might want it to.
Testing window sensitivity
Before committing to a window, test how it affects your results. Run the same attribution model (linear is a good choice since it uses all touchpoints) with several window lengths and compare channel rankings:
```sql
-- Run for 7, 14, 30, 60, 90 day windows
-- Compare: which channels gain/lose share as the window expands?
```

If your top 3 channels are the same across all windows, the window choice doesn’t matter much for your business decisions. If a channel jumps from #2 at 14 days to #7 at 90 days, you’ve discovered that channel’s contribution is window-sensitive — it operates primarily in a specific part of the journey timeline.
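One way to run that comparison in a single query, assuming the same illustrative `touchpoint_events`/`conversion_events` schema plus hypothetical `channel` and `conversion_id` columns, is to cross join the candidate window lengths and compute linear credit within each:

```sql
-- Linear-attribution channel totals at several candidate windows.
-- Each conversion's credit is split evenly across its in-window touchpoints.
WITH scoped AS (
  SELECT
    w AS lookback_days,
    c.conversion_id,
    t.channel,
    COUNT(*) OVER (PARTITION BY w, c.conversion_id) AS touches_in_journey
  FROM UNNEST([7, 14, 30, 60, 90]) AS w
  CROSS JOIN conversion_events c
  JOIN touchpoint_events t
    ON t.user_pseudo_id = c.user_pseudo_id
    AND t.event_timestamp <= c.event_timestamp
    AND t.event_timestamp >= TIMESTAMP_SUB(c.event_timestamp, INTERVAL w DAY)
)
SELECT
  lookback_days,
  channel,
  SUM(1 / touches_in_journey) AS attributed_conversions
FROM scoped
GROUP BY lookback_days, channel
ORDER BY lookback_days, attributed_conversions DESC
```

Pivoting the result by `lookback_days` makes the gain/lose-share comparison a single table scan of the output rather than five separate model runs.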
This sensitivity analysis is itself a form of model disagreement. Just as disagreement between models reveals uncertainty in credit assignment, disagreement across windows reveals uncertainty in journey scoping.
When to revisit your window
Don’t set it and forget it. Revisit the lookback window when:
- Your product or pricing changes. Higher prices typically lengthen consideration phases. A product that moved from $50 to $500 probably needs a longer window.
- You enter a new market segment. B2B enterprise sales cycles are fundamentally different from self-serve signups, even for the same product.
- Seasonal patterns shift. Holiday impulse purchases may warrant a shorter window than your annual default.
- Your conversion data tells you to. Run the percentile query above quarterly. If P90 time-to-conversion is drifting, your window should drift with it.
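The quarterly drift check can be built from the same percentile query by bucketing conversions by quarter (same assumed `touchpoints` schema as the earlier query):

```sql
-- P90 first-touch-to-conversion time by quarter, to watch for drift.
WITH first_touches AS (
  SELECT
    user_id,
    transaction_id,
    MIN(touchpoint_timestamp) AS first_touch_at,
    MAX(conversion_timestamp) AS converted_at
  FROM touchpoints
  GROUP BY user_id, transaction_id
)
SELECT
  DATE_TRUNC(DATE(converted_at), QUARTER) AS quarter,
  APPROX_QUANTILES(TIMESTAMP_DIFF(converted_at, first_touch_at, DAY), 100)[OFFSET(90)] AS p90_days
FROM first_touches
GROUP BY quarter
ORDER BY quarter
```

A steadily rising `p90_days` is the signal to lengthen the window; a falling one means the window can tighten and shed stale touchpoints.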