Comparing Attribution Models: A dbt + Looker Studio Dashboard

Different attribution models tell different stories about your marketing. First-touch credits the channel that introduced the customer. Last-touch credits the channel that closed the deal. Linear spreads credit evenly. Each model is correct within its own logic, and each model is biased.

Running multiple attribution models in parallel won’t give you the “right” answer, but it will show you where your models agree, where they disagree, and what that disagreement tells you about uncertainty in your data. For a broader look at why warehouse-based attribution matters, see my attribution guide.

Why run multiple attribution models

Every attribution model has a built-in bias. First-touch over-credits discovery channels like organic search and content marketing. Last-touch over-credits conversion channels like branded search and retargeting. Linear treats a random display impression the same as a demo request email.

When you rely on a single model, you inherit its blind spots. Marketing teams optimize toward whatever the model rewards. If the model systematically under-credits certain channels, those channels get defunded regardless of their actual contribution.

Running models in parallel solves this by making the biases visible. When first-touch says organic search drives 40% of revenue and last-touch says it drives 8%, the true contribution most likely sits somewhere in between. The disagreement itself is informative: it tells you that organic search plays a role in customer journeys, but its exact contribution is uncertain.
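The mechanics are simple enough to sketch in a few lines. This toy example (channel names and revenue are made up) shows how three rules split credit for one converting journey:

```python
# Toy sketch (illustrative only): how three attribution rules split credit
# for a single converting journey. Channel names and revenue are made up.
revenue = 100.0
journey = ["organic_search", "display", "email"]  # touchpoints in order

def first_touch(path, rev):
    # All credit to the channel that introduced the customer.
    return {path[0]: rev}

def last_touch(path, rev):
    # All credit to the channel that closed the conversion.
    return {path[-1]: rev}

def linear(path, rev):
    # Equal credit to every touchpoint; repeated channels accumulate.
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + rev / len(path)
    return credit

for rule in (first_touch, last_touch, linear):
    print(rule.__name__, rule(journey, revenue))
```

Even on this three-step journey, organic search gets 100% of the credit under first-touch, 0% under last-touch, and a third under linear. That spread is the uncertainty the rest of this setup is designed to surface.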

This approach also enables strategy alignment across teams. Awareness-focused teams can use first-touch to optimize discovery. Conversion teams can use last-touch to optimize closing. Leadership can look at the range across models to understand the full picture.

When models agree, you have high confidence in the insight. When they disagree significantly, interpret the data with caution and consider running an incrementality test to find ground truth.

Designing a multi-model dbt project

A multi-model attribution system needs clean separation between the attribution logic and the comparison layer. Each model runs independently, and a final comparison model unions them together for dashboard consumption. (For more on structuring dbt projects in general, see my dedicated guide.)

Project structure for attribution

models/
├── base/
│   └── base__ga4__events.sql             # Raw event cleaning
├── intermediate/
│   ├── int__events_sessionized.sql       # Sessionization
│   ├── int__sessions_enriched.sql        # Marketing touchpoints
│   └── int__touchpoints_pathed.sql       # User journey paths
└── marts/
    └── attribution/
        ├── mrt__attribution__first_touch.sql
        ├── mrt__attribution__last_touch.sql
        ├── mrt__attribution__linear.sql
        ├── mrt__attribution__position_based.sql
        ├── mrt__attribution__time_decay.sql
        ├── mrt__attribution__conversions.sql
        └── mrt__attribution__comparison.sql

Each mrt__attribution__* model implements one attribution approach and outputs a consistent schema: conversion__id, touchpoint__channel, touchpoint__attributed_revenue, conversion__converted_at, plus any additional dimensions you need for analysis (campaign, user segment).
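As a concrete example, a minimal first-touch model might look like the sketch below. It assumes int__touchpoints_pathed carries one row per conversion-touchpoint pair with a touchpoint__occurred_at timestamp and a conversion__revenue column; adjust the column names to your actual intermediate schema:

```sql
-- Sketch of mrt__attribution__first_touch.sql (column names on the
-- intermediate model are assumptions)
WITH ranked AS (
    SELECT
        conversion__id,
        touchpoint__channel,
        conversion__revenue,
        conversion__converted_at,
        ROW_NUMBER() OVER (
            PARTITION BY conversion__id
            ORDER BY touchpoint__occurred_at
        ) AS touchpoint_rank
    FROM {{ ref('int__touchpoints_pathed') }}
)

SELECT
    conversion__id,
    touchpoint__channel,
    conversion__revenue AS touchpoint__attributed_revenue,
    conversion__converted_at
FROM ranked
WHERE touchpoint_rank = 1
```

Last-touch is the same query ordered descending; linear divides conversion__revenue by the touchpoint count instead of filtering to one row.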

The mrt__attribution__comparison model unions all the attribution models together with a model_type column that identifies which model generated each row.

The comparison model pattern

The comparison model unions individual models and adds an identifier:

-- mrt__attribution__comparison.sql
{% set attribution_models = [
    'first_touch',
    'last_touch',
    'linear',
    'position_based',
    'time_decay'
] %}

{% for model in attribution_models %}
SELECT
    conversion__id,
    touchpoint__channel,
    touchpoint__attributed_revenue,
    conversion__converted_at,
    '{{ model }}' AS model_type
FROM {{ ref('mrt__attribution__' ~ model) }}
{% if not loop.last %}UNION ALL{% endif %}
{% endfor %}

This produces one row per conversion per channel per model. A single conversion with three touchpoints generates 15 rows (3 touchpoints × 5 models). The model_type column becomes the filter in Looker Studio.

Add a dbt test to verify that attributed revenue sums to total revenue for each model. Because this check aggregates across rows, it fits a singular test (a SQL file in tests/ that fails when it returns any rows) better than a generic schema test:

-- tests/assert_attribution_revenue_reconciles.sql
SELECT
    model_type,
    SUM(touchpoint__attributed_revenue) AS attributed_revenue
FROM {{ ref('mrt__attribution__comparison') }}
GROUP BY model_type
HAVING ABS(
    SUM(touchpoint__attributed_revenue)
    - (SELECT SUM(conversion__revenue) FROM {{ ref('mrt__attribution__conversions') }})
) > 0.01

The small tolerance absorbs rounding from the fractional splits that linear and time-decay models produce.

Building the Looker Studio dashboard

With the comparison model in place, Looker Studio can pull directly from BigQuery and let users switch between attribution models.

Essential metrics to track

Channel contribution by model: The primary view. A bar chart showing attributed revenue by channel, with the attribution model as a dropdown filter. Users switch between first-touch, last-touch, position-based, time-decay, and other models to see how credit shifts.

Model comparison view: A grouped bar chart showing all models side by side for each channel. This immediately highlights where models diverge. The channels with the widest spread between bars are the ones where your attribution is most uncertain.

Assisted conversions: Touchpoints that contributed to a conversion but weren’t the credited touchpoint under single-touch models. Calculate this as the count of touchpoints in a conversion path minus one. High assisted conversion rates indicate channels that nurture rather than close.

CPA and ROAS by model: Cost per acquisition and return on ad spend, calculated for each attribution model. CPA divides spend by attributed conversions, and ROAS divides attributed revenue by spend, so both numbers shift depending on which model's attribution you plug in.

Path length distribution: A histogram showing how many touchpoints conversions typically include. Shorter paths favor single-touch models; longer paths make multi-touch models more important.
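The per-model CPA and ROAS arithmetic is worth making explicit. This sketch uses made-up spend and attribution figures for one channel:

```python
# Sketch: CPA and ROAS for one channel under different attribution models.
# Spend and attributed figures are made-up numbers for illustration.
spend = 10_000.0  # monthly spend on the channel

attributed = {
    # model_type: (attributed_conversions, attributed_revenue)
    "first_touch": (120, 48_000.0),
    "last_touch": (35, 14_000.0),
    "linear": (70, 28_000.0),
}

for model, (conversions, revenue) in attributed.items():
    cpa = spend / conversions   # cost per attributed acquisition
    roas = revenue / spend      # attributed return on ad spend
    print(f"{model}: CPA=${cpa:,.2f}, ROAS={roas:.1f}x")
```

The same channel can look like a bargain under first-touch and marginal under last-touch; reporting both is the point of the comparison dashboard.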

Dashboard hierarchy for different audiences

Not everyone needs the same level of detail. Structure your dashboards around decision-making needs:

Executive dashboard (CMO, CEO): Marketing’s bottom-line impact. Total attributed revenue, revenue by channel at a high level, overall ROI, customer acquisition cost. Weekly or monthly cadence. No model selection; show a blended view or the company’s primary model.

Manager dashboard: Channel performance by model, campaign-level breakdowns, budget allocation recommendations. Weekly cadence. Include the model selector so managers can stress-test their channel’s performance under different assumptions.

Analyst dashboard: Granular touchpoint analysis, path exploration, model comparison details, conversion lag analysis. Daily access. Full access to all filters and dimensions for deep investigation.

Practical Looker Studio tips

Connect to the mrt__attribution__comparison table in BigQuery. Create a dropdown control bound to the model_type field and add it to every page. This gives users one-click switching between models.

For the comparison view, use a pivot table with channel as rows, model_type as columns, and SUM(touchpoint__attributed_revenue) as the metric. This creates a side-by-side comparison without complex chart configuration.

Pre-aggregate data in dbt when possible. Looker Studio performs better with fewer rows, and BigQuery charges for bytes scanned. Create a summary table that aggregates by day, channel, and model rather than querying the full touchpoint-level data.
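A minimal summary model along those lines might look like this sketch (the model name is hypothetical):

```sql
-- Hypothetical mrt__attribution__daily_summary.sql
SELECT
    DATE(conversion__converted_at) AS conversion_date,
    touchpoint__channel,
    model_type,
    SUM(touchpoint__attributed_revenue) AS attributed_revenue,
    COUNT(DISTINCT conversion__id) AS conversions
FROM {{ ref('mrt__attribution__comparison') }}
GROUP BY 1, 2, 3
```

Point Looker Studio at this table for the main pages and reserve the full comparison table for the analyst dashboard.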

Looker Studio limitations and workarounds

Looker Studio is free and integrates well with BigQuery, but it has constraints you’ll need to work around.

5,000 row visualization limit: Charts can only display 5,000 rows. For touchpoint-level data, this limit hits quickly. Pre-aggregate to day/channel level in dbt rather than relying on Looker Studio to aggregate on render.

No native Sankey diagrams: Path visualization is a common request for attribution dashboards, but Looker Studio doesn’t support Sankey charts. Workarounds include using a third-party community visualization, building a path analysis table (sequence of channels per conversion), or moving path visualization to a different tool.

Limited drill-through: You can’t easily click a channel and drill into its campaigns, then into individual conversions. Work around this with linked dashboards: add a filter that passes channel to a detail page.

No version control: Changes aren’t tracked. Document dashboard changes manually, or maintain a changelog in Notion or Confluence.

When to consider alternatives: Tableau (~$70/user/month) offers better path visualization and drill-through. Metabase (free self-hosted) works well if you’re already running it. Power BI ($10-20/user/month) integrates with Microsoft ecosystems. Each has tradeoffs; Looker Studio remains the path of least resistance for BigQuery shops.

Validating your attribution models

Attribution models are estimates. Validation ensures those estimates are directionally useful.

Using model disagreement as a signal

When models agree on a channel’s contribution, you can communicate with confidence. “Paid search drives 25-28% of revenue across all models” is a strong statement.

When models disagree sharply, flag it for stakeholders. “Organic social drives between 5% and 22% of revenue depending on the model” tells leadership that this channel’s contribution is genuinely uncertain. That kind of honesty about the limits of the data is more valuable than false precision.

Calculate a disagreement score for each channel: the standard deviation of attributed revenue percentages across models. Channels with high disagreement are candidates for incrementality testing.
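The disagreement score reduces to a standard deviation over each channel's revenue shares. A sketch with illustrative numbers:

```python
import statistics

# Sketch: disagreement score per channel = population std dev of the
# channel's revenue share (%) across attribution models. Shares are
# illustrative numbers, one per model.
shares_by_channel = {
    "paid_search": [25.0, 28.0, 26.0, 27.0, 26.5],
    "organic_social": [5.0, 22.0, 12.0, 9.0, 15.0],
}

def disagreement(shares):
    # Higher std dev across models = less certain attribution.
    return statistics.pstdev(shares)

for channel, shares in shares_by_channel.items():
    print(channel, round(disagreement(shares), 2))
```

Here paid_search scores 1.0 while organic_social scores roughly 5.7, which is exactly the ranking you would use to prioritize incrementality tests.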

Incrementality testing as ground truth

Attribution models measure correlation. Incrementality tests measure causation. When you need to know whether a channel actually drives conversions (versus just being present in converting journeys), run an experiment.

Holdout tests: Exclude 10% of your audience from seeing ads on a specific channel. Compare conversion rates between the exposed and holdout groups. The difference is the channel’s incremental contribution.

Geo tests: For channels that can’t be user-targeted (TV, radio, billboards), run the test geographically. Turn off the channel in a set of markets and compare to control markets.

Platform lift studies: Meta, Google, and TikTok all offer Conversion Lift studies that handle the holdout mechanics for you. The tradeoff is that you’re trusting the platform’s measurement.
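The holdout arithmetic is straightforward. This sketch uses made-up numbers and skips significance testing, which a real readout would need:

```python
# Sketch: incremental lift from a simple holdout test (made-up numbers;
# a real analysis should also test statistical significance).
exposed_users, exposed_conversions = 90_000, 1_800  # saw the channel's ads
holdout_users, holdout_conversions = 10_000, 160    # ads suppressed

cr_exposed = exposed_conversions / exposed_users    # conversion rate, exposed
cr_holdout = holdout_conversions / holdout_users    # baseline conversion rate

incremental_rate = cr_exposed - cr_holdout
incremental_conversions = incremental_rate * exposed_users
relative_lift = incremental_rate / cr_holdout

print(f"incremental conversions: {incremental_conversions:.0f}")
print(f"relative lift: {relative_lift:.0%}")
```

With these numbers the channel adds 360 incremental conversions, a 25% lift over baseline, even though every one of the 1,800 exposed conversions would show up in its attributed totals.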

Use incrementality results to calibrate your attribution models. If Markov attribution says email drives 15% of conversions but a holdout test shows 8% incremental lift, adjust your interpretation accordingly.

Warning signs your attribution is failing

Watch for these patterns:

Attributed revenue doesn’t match actual revenue: If your models attribute $100K to marketing but finance sees $80K in the bank, something is wrong. Check your conversion tracking, how you’re joining touchpoints to conversions, and your definition of “conversion.”

Margin erosion despite high ROAS: If a channel shows strong return on ad spend but scaling it doesn’t grow profits proportionally, the attribution is likely over-crediting the channel.

Untraceable deals: In B2B, if more than 60% of closed deals can’t be attributed to any marketing activity, your tracking is incomplete. Either touchpoints aren’t being captured or your identity resolution isn’t connecting anonymous sessions to known users.

Communicating results to stakeholders

The technical work is wasted if stakeholders can’t act on it.

Lead with outcomes, not methodology: “Content marketing influences 47% of our conversions” lands better than “Our Markov chain attribution model with 30-day lookback windows shows…”

Present ranges, not false precision: “Paid search drives $50K-$80K monthly depending on the attribution model” is more honest than picking one number. The range communicates confidence level without requiring a statistics lesson.

Highlight agreement and disagreement explicitly: “All models agree that email is our most efficient channel. Models disagree on display’s contribution, so we recommend an incrementality test before changing display spend.”

Structure recommendations clearly: Challenge → Insight → Recommendation → Outcome.


No single attribution model shows the full picture, but running several in parallel surfaces where the uncertainty lives. Focus on where they agree, dig into where they disagree, and use the disagreement to drive better decisions rather than chasing perfect attribution.