Time-decay attribution assigns more credit to touchpoints that occurred closer to the conversion. The logic is straightforward: a touchpoint from yesterday matters more than one from two weeks ago. Unlike position-based models that care about where a touchpoint falls in the sequence, time-decay cares about when it happened relative to conversion.
The exponential decay formula
Time-decay uses exponential decay with a half-life parameter:
```
Weight(touchpoint) = 2^(-days_before_conversion / half_life)
```

The half-life determines how quickly credit diminishes. With a 7-day half-life:
- Day 0 (conversion day): Weight = 1.0 (100%)
- Day 7: Weight = 0.5 (50%)
- Day 14: Weight = 0.25 (25%)
- Day 21: Weight = 0.125 (12.5%)
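The table above follows directly from the formula. A minimal sketch (the function name `decay_weight` is illustrative, not from any library):

```python
def decay_weight(days_before_conversion: float, half_life: float) -> float:
    """Exponential decay: credit halves every `half_life` days."""
    return 2 ** (-days_before_conversion / half_life)

# Reproduce the 7-day half-life schedule
for days in (0, 7, 14, 21):
    print(f"Day {days}: weight = {decay_weight(days, half_life=7.0)}")
```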
Google Analytics uses a 7-day half-life by default. This works reasonably well for e-commerce but may be too aggressive for B2B, where a touchpoint from three weeks ago can be the one that actually planted the seed.
The formula produces raw weights that need normalization — dividing each weight by the sum of all weights for that conversion — so that attributed revenue sums to actual revenue.
Choosing the right half-life
Half-life should roughly match your typical sales cycle length. Too short and you under-credit early-funnel channels that drove awareness. Too long and you approach linear attribution, defeating the purpose of using time-decay in the first place.
| Industry | Half-Life | Lookback Window |
|---|---|---|
| B2C E-commerce (impulse) | 3-7 days | 7-14 days |
| B2C E-commerce (considered) | 7-14 days | 30-45 days |
| B2B Mid-Market | 14-30 days | 90-180 days |
| B2B Enterprise | 30-45 days | 180+ days |
The lookback window and the half-life work together. The lookback window determines which touchpoints qualify at all (anything outside the window is excluded). The half-life determines how steeply credit drops for touchpoints within the window. A 30-day lookback with a 7-day half-life means touchpoints from 3-4 weeks ago are included but receive very little credit.
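To make the interaction concrete, this sketch applies both filters to a few touchpoint ages (the cutoffs are the 30-day lookback and 7-day half-life from the example above):

```python
def decay_weight(days: float, half_life: float) -> float:
    return 2 ** (-days / half_life)

lookback_days = 30
half_life = 7.0

for days in (3, 10, 25, 35):
    if days > lookback_days:
        # Outside the lookback window: excluded entirely, not just down-weighted
        print(f"day -{days}: excluded by lookback window")
    else:
        print(f"day -{days}: weight = {decay_weight(days, half_life):.4f}")
```

A touchpoint from 25 days ago qualifies but carries a weight of roughly 0.08, which illustrates the "included but receives very little credit" regime.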
BigQuery implementation
The implementation calculates raw weights, then normalizes so they sum to 1:
```sql
WITH decay_weights AS (
  SELECT
    user_id,
    transaction_id,
    channel,
    revenue,
    conversion_timestamp,
    touchpoint_timestamp,
    POW(
      0.5,
      TIMESTAMP_DIFF(conversion_timestamp, touchpoint_timestamp, MINUTE)
        / (7.0 * 24 * 60)
    ) AS raw_weight
  FROM touchpoints
  WHERE touchpoint_timestamp >= TIMESTAMP_SUB(
    conversion_timestamp, INTERVAL 30 DAY
  )
),

normalized AS (
  SELECT
    *,
    SUM(raw_weight) OVER (
      PARTITION BY user_id, transaction_id
    ) AS total_weight
  FROM decay_weights
)

SELECT
  user_id,
  transaction_id,
  channel,
  (raw_weight / total_weight) * revenue AS attributed_revenue
FROM normalized
```

What each piece does:
- `TIMESTAMP_DIFF(..., MINUTE)` calculates the time gap in minutes for precision — using days would lose intra-day granularity
- `7.0 * 24 * 60` converts the 7-day half-life to minutes (10,080 minutes)
- `POW(0.5, time_ratio)` applies exponential decay
- Dividing by `total_weight` normalizes so all weights sum to 1, ensuring attributed revenue equals actual revenue
For a customer with touchpoints at day -14, day -7, and day 0:
- Day -14: raw_weight = 0.25, normalized = 0.25/1.75 = 14.3%
- Day -7: raw_weight = 0.5, normalized = 0.5/1.75 = 28.6%
- Day 0: raw_weight = 1.0, normalized = 1.0/1.75 = 57.1%
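These figures check out numerically. A quick sketch of the normalization step:

```python
# Raw weights for touchpoints at day -14, -7, and 0 with a 7-day half-life
raw = {"day -14": 0.25, "day -7": 0.5, "day 0": 1.0}

total = sum(raw.values())  # 1.75
shares = {k: w / total for k, w in raw.items()}

for k, s in shares.items():
    print(f"{k}: {s:.1%}")
```

Because the shares sum to 1, multiplying each by the transaction revenue guarantees the attributed amounts add back up to the actual revenue.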
The most recent touchpoint gets more than half the credit, which matches the model’s assumption that recency correlates with impact.
Parameterizing the half-life
Hard-coding the half-life makes experimentation difficult. Use a BigQuery variable or a configuration table:
```sql
DECLARE half_life_days FLOAT64 DEFAULT 7.0;

WITH decay_weights AS (
  SELECT
    user_id,
    transaction_id,
    channel,
    revenue,
    conversion_timestamp,
    touchpoint_timestamp,
    POW(
      0.5,
      TIMESTAMP_DIFF(conversion_timestamp, touchpoint_timestamp, HOUR)
        / (half_life_days * 24)
    ) AS raw_weight
  FROM touchpoints
  WHERE touchpoint_timestamp >= TIMESTAMP_SUB(
    conversion_timestamp, INTERVAL 30 DAY
  )
)
-- ... rest of query
```

This lets you test different half-life values and compare results. Running the same query with half-life values of 3, 7, 14, and 30 days reveals how sensitive your channel rankings are to the decay parameter. If rankings stay stable, the parameter choice doesn’t matter much. If they shift dramatically, you need to invest more thought into getting the half-life right — or accept that the uncertainty is real.
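The sensitivity check can also be prototyped outside the warehouse. This sketch compares per-channel credit shares for a single journey across several half-life values (the journey data is invented for illustration):

```python
# (channel, days before conversion) for one illustrative journey
journey = [("display", 21), ("email", 10), ("paid_search", 2)]

def channel_shares(journey, half_life):
    """Raw decay weights per touchpoint, normalized to credit shares."""
    raw = [(ch, 2 ** (-days / half_life)) for ch, days in journey]
    total = sum(w for _, w in raw)
    return {ch: w / total for ch, w in raw}

for hl in (3, 7, 14, 30):
    shares = channel_shares(journey, hl)
    print(hl, {ch: round(s, 3) for ch, s in shares.items()})
```

Short half-lives concentrate credit on the most recent touchpoint; long ones flatten the distribution toward linear attribution, exactly the trade-off described above.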
In dbt, use `var()` instead of `DECLARE` to make the half-life configurable in `dbt_project.yml`. See dbt Weighted Attribution Models for the implementation.
When to use time-decay attribution
Time-decay works best when:
- Recent touchpoints demonstrably influence conversion more than older ones
- You’re optimizing for immediate conversion impact
- Sales cycles vary widely in length (position-based struggles here because “first” in a 2-day journey is very different from “first” in a 90-day journey)
- You want to weight urgency and recency into the model
Time-decay is less appropriate when early-funnel brand awareness is a priority. A brand campaign from three weeks ago that planted the initial awareness gets very little credit under time-decay, even though it may have been the most important touchpoint in the journey. If you suspect that’s happening, position-based attribution gives early touches more weight.
The most defensible approach is running both and comparing results. When position-based and time-decay produce similar channel rankings, you have higher confidence in the findings. When they diverge significantly, that’s a signal to investigate why — and potentially a candidate for incrementality testing.