
Triangulated Marketing Measurement

Why resilient marketing measurement combines three approaches: multi-touch attribution for daily optimization, media mix modeling for strategic allocation, and incrementality testing for causal validation

Planted · bigquery · ga4 · analytics

No single measurement methodology captures the full picture of marketing effectiveness. Multi-touch attribution can’t measure causation. Media mix modeling can’t optimize daily tactics. Incrementality testing can’t run continuously on every channel. Each approach has structural blind spots that the others fill.

The most resilient measurement strategies don’t rely on one methodology. They triangulate — using three complementary approaches that together cover what any one approach misses.

The three legs

Multi-touch attribution (MTA)

Multi-touch attribution assigns conversion credit to individual touchpoints in a customer journey. It operates on user-level data, provides granular channel and campaign analysis, and supports near-real-time optimization.
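
As a concrete illustration, here is a minimal Python sketch of three common heuristic credit rules (last-touch, linear, time-decay) applied to a hypothetical four-touch journey. The channel names and the 0.5 decay factor are invented for the example.

```python
from collections import defaultdict

# A toy converting journey: ordered touchpoints, oldest first.
journey = ["paid_search", "email", "display", "retargeting"]

def last_touch(touchpoints):
    """All credit to the final touchpoint before conversion."""
    return {touchpoints[-1]: 1.0}

def linear(touchpoints):
    """Equal credit to every touchpoint."""
    share = 1.0 / len(touchpoints)
    credit = defaultdict(float)
    for t in touchpoints:
        credit[t] += share
    return dict(credit)

def time_decay(touchpoints, decay=0.5):
    """More credit to touchpoints closer to conversion."""
    weights = [decay ** (len(touchpoints) - 1 - i) for i in range(len(touchpoints))]
    total = sum(weights)
    credit = defaultdict(float)
    for t, w in zip(touchpoints, weights):
        credit[t] += w / total
    return dict(credit)

print(last_touch(journey))   # {'retargeting': 1.0}
print(linear(journey))       # 0.25 each
print(time_decay(journey))   # retargeting weighted highest
```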

What it’s good for: Daily digital channel optimization. Which campaigns are driving conversions this week? Which channels should get more budget in the next sprint? Where are the conversion paths breaking down?

What it can’t do: Prove causation. MTA observes that a touchpoint was present in a converting journey and assigns credit based on position, timing, or statistical patterns. But presence in a journey doesn’t prove the touchpoint caused the conversion. A retargeting ad shown to someone who was already about to purchase gets full credit under last-touch, even though the conversion would have happened without it.

MTA also can’t measure offline channels. TV spots, podcast sponsorships, billboards, and event marketing don’t generate clickable touchpoints with UTM parameters. They’re invisible to attribution models, which means MTA systematically under-credits offline marketing.

Media mix modeling (MMM)

Media mix modeling takes a fundamentally different approach. Instead of tracking individual user journeys, MMM uses aggregated, time-series data — total spend per channel per week, total conversions per week, plus external factors like seasonality, economic conditions, and competitor activity — to estimate each channel’s contribution through statistical regression.
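
As a rough sketch of the mechanics, the toy model below regresses synthetic weekly conversions on per-channel spend and a holiday indicator with ordinary least squares. Real MMMs add adstock and saturation transforms, priors, and many more covariates; every number here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical aggregates: 104 weeks of spend per channel (in $K),
# plus a simple holiday-season indicator.
weeks = 104
spend = rng.uniform(10, 100, size=(weeks, 3))          # search, social, tv
holiday = (np.arange(weeks) % 52 >= 46).astype(float)  # last 6 weeks of year

true_betas = np.array([3.0, 1.5, 0.8])                 # unknown in practice
conversions = spend @ true_betas + 400 * holiday + rng.normal(0, 50, weeks)

# Regress weekly conversions on spend and seasonality.
X = np.column_stack([spend, holiday, np.ones(weeks)])
coef, *_ = np.linalg.lstsq(X, conversions, rcond=None)

for name, b in zip(["search", "social", "tv", "holiday", "base"], coef):
    print(f"{name:8s} ~ {b:6.2f} conversions per unit")
```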

What it’s good for: Quarterly strategic budget allocation. How should we split our total marketing budget across channels next quarter? What’s the diminishing return curve for each channel? If we increase total spend by 20%, where do we get the best marginal return?

What it can’t do: Optimize daily tactics. MMM operates on weekly or monthly aggregates. It can tell you that paid social drives 18% of incremental revenue over the quarter, but it can’t tell you that yesterday’s campaign targeting lookalike audiences outperformed the retargeting campaign.

MMM’s strength is that it works with aggregated data, making it inherently privacy-safe. It doesn’t need user-level tracking, cookies, or consent parameters. It includes offline channels naturally — TV spend goes into the regression alongside digital spend. And it captures diminishing returns: the insight that spending $100K on paid search drives 10x return while spending $200K only drives 6x return.
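
To make the diminishing-returns arithmetic concrete, the snippet below uses a hypothetical log-saturation curve whose two parameters were chosen only to reproduce the 10x/6x figures above; neither the functional form nor the constants come from a real model. It also prints the marginal return, which falls even faster than the average.

```python
from math import log

A, K = 297.0, 3.57  # hypothetical curve parameters, fitted to the text's figures

def response(spend_k):
    """Revenue (in $K) from a log-saturation curve: A * ln(1 + spend/K)."""
    return A * log(1 + spend_k / K)

def marginal(spend_k):
    """Derivative of the curve: the return on the next dollar."""
    return A / (K + spend_k)

for s in (100, 200):
    print(f"${s}K spend -> {response(s)/s:.1f}x average, {marginal(s):.1f}x marginal")
```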

The tradeoff is data requirements. MMM needs 2-3 years of historical spend and outcome data, preferably with variation in spend levels across channels. If you’ve spent the same amount on every channel for the past year, MMM can’t distinguish their contributions.

Incrementality testing

Incrementality testing measures causation through controlled experiments. Holdout tests suppress ads to a portion of the audience and measure whether conversions actually drop. Geographic tests turn off channels in specific markets. Platform lift studies use built-in experimentation tools.
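
A minimal sketch of the readout from a holdout test, assuming hypothetical group sizes and conversion counts: relative lift plus a two-proportion z-test, implemented with only the standard library.

```python
from math import erf, sqrt

# Hypothetical holdout test: ads suppressed for the control group.
treat_n, treat_conv = 50_000, 1_250   # exposed group
ctrl_n, ctrl_conv = 50_000, 1_100     # holdout group

p_t, p_c = treat_conv / treat_n, ctrl_conv / ctrl_n
lift = (p_t - p_c) / p_c  # relative incremental lift

# Two-proportion z-test for whether the lift is statistically real.
p_pool = (treat_conv + ctrl_conv) / (treat_n + ctrl_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / treat_n + 1 / ctrl_n))
z = (p_t - p_c) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"lift = {lift:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```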

What it’s good for: Proving that a channel drives real incremental value, not just capturing demand that would have converted anyway. This is the closest thing to ground truth in marketing measurement.

What it can’t do: Run continuously on every channel. Each test requires suppressing ads to a portion of your audience, which means forgoing potential conversions during the test period. Testing is expensive, and you can only test a few channels at a time.

How the three approaches calibrate each other

The real value of triangulation isn’t running three separate analyses and picking the one you like best. It’s using each approach to calibrate the others.

Incrementality calibrates MTA. Your attribution model says email drives 15% of conversions. A holdout test shows only 8% incremental lift. This tells you that email is present in 15% of converting journeys but only causally responsible for about half of them. The rest would have converted anyway. You don’t “fix” the attribution model’s weights — you develop intuition about where it over- and under-credits, and factor that into budget decisions.
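
Using the text's own numbers, a small sketch of that intuition: the ratio of tested lift to attributed share acts as a rough deflator when translating MTA credit into budget decisions. The 4,500 attributed conversions are hypothetical.

```python
# Email shows up in 15% of converting journeys (MTA),
# but a holdout test measured only 8% incremental lift.
attributed_share = 0.15
incremental_share = 0.08

# The ratio is a rough "causal deflator" for this channel's MTA credit.
deflator = incremental_share / attributed_share  # ~0.53

mta_email_conversions = 4_500  # hypothetical attributed count
calibrated = mta_email_conversions * deflator
print(f"deflator = {deflator:.2f}, calibrated conversions ~ {calibrated:.0f}")
```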

MTA informs incrementality test priorities. Channels with high disagreement scores — where different attribution models produce wildly different results — are the best candidates for incrementality testing. The disagreement signals genuine uncertainty about a channel’s contribution. An incrementality test resolves that uncertainty with causal evidence.
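
One simple way to operationalize a disagreement score, sketched below with made-up credit shares from three models: the coefficient of variation of each channel's credit across models, with the highest-disagreement channels tested first.

```python
from statistics import mean, stdev

# Hypothetical credit shares per channel from three attribution models.
credits = {
    "paid_search": [0.30, 0.28, 0.31],
    "email":       [0.15, 0.08, 0.22],
    "display":     [0.10, 0.04, 0.18],
}

def disagreement(shares):
    """Coefficient of variation across models: higher = less agreement."""
    return stdev(shares) / mean(shares)

ranked = sorted(credits, key=lambda c: disagreement(credits[c]), reverse=True)
for channel in ranked:
    print(f"{channel:12s} disagreement = {disagreement(credits[channel]):.2f}")
# Channels at the top of this list are the best incrementality-test candidates.
```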

MMM validates MTA at the portfolio level. If your attribution models say paid search drives 30% of conversions and MMM says paid search drives 28% of incremental revenue, you have convergence. If MTA says 30% and MMM says 12%, investigate the gap. It might mean paid search is capturing existing demand (MTA sees the touchpoint) without creating new demand (MMM doesn’t see the incremental lift).

Incrementality calibrates MMM. If your media mix model estimates that display advertising hits diminishing returns around $50K/month, an incrementality test that turns off display in a few markets can validate whether the actual lift matches the model’s prediction. If it does, you trust the model’s curve. If it doesn’t, you retrain.
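
A sketch of that decision rule with invented numbers and an arbitrary 25% tolerance: compare the model's predicted lift for the suppressed markets against what the test measured.

```python
# Hypothetical check: does a geo holdout confirm the MMM's display estimate?
mmm_predicted_lift = 0.06   # model's predicted lift from display
observed_test_lift = 0.035  # measured in markets where display was off
tolerance = 0.25            # accept up to 25% relative deviation

gap = abs(observed_test_lift - mmm_predicted_lift) / mmm_predicted_lift
if gap <= tolerance:
    print(f"within tolerance (gap {gap:.0%}): keep the model's response curve")
else:
    print(f"gap {gap:.0%} exceeds tolerance: retrain the MMM with the test result")
```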

Matching methodology to maturity

Not every team needs all three legs from day one. Match your measurement approach to your current scale:

| Stage | Recommended approach |
| --- | --- |
| Early / SMB | Heuristic attribution models (linear, time-decay) in SQL. This alone provides more insight than relying on platform-reported numbers. |
| Growth (500+ monthly conversions) | Add a [[Markov Chain Attribution]] model. |
| Scale (1,000+ monthly conversions) | Add [[Shapley Value Attribution]]. |
| Enterprise (Google ecosystem) | Google DDA + custom warehouse validation to detect [[Google DDA Silent Fallback]]. |
| Enterprise (full control) | Custom Markov/Shapley, ongoing incrementality testing program, media mix modeling with quarterly refresh. Full triangulated measurement. |

Start with heuristic models. They’re transparent, easy to explain to stakeholders, and often sufficient for early optimization. Add data-driven approaches when you have the data volume to support them and the questions that require them. Add incrementality testing when you have the budget and the channels with enough uncertainty to justify the cost of testing. Add MMM when you have enough historical data and the strategic budget allocation questions that demand it.

Limitations

No attribution model provides complete accuracy. Customer journeys are complex, and channels interact in ways that resist clean measurement. A customer influenced by a billboard, organic search, blog posts, email, and a retargeting ad cannot be perfectly decomposed by any model.

Attribution is more useful when it is transparent and defensible than when it is precise but opaque. Running multiple models, validating with experiments, and communicating uncertainty alongside results supports better budget decisions than relying on a single platform-reported number.