An attribution dashboard’s job is to make model outputs actionable. The underlying data comes from a comparison model that unions multiple attribution models with a model_type discriminator. The dashboard layer adds interactivity — model switching, audience-appropriate views, and the visual context that turns numbers into decisions.
Audience segmentation is the primary design decision: executives need directional answers from a single model; managers need model-switching to stress-test channel performance; analysts need full granularity for path investigation. Showing the same view to all three produces a dashboard that serves none.
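For context on the data feeding everything below, a comparison model of this kind is typically just a union of the per-model marts with a discriminator column. A minimal dbt sketch, assuming the per-model marts share a common column set (model and column names follow the naming used later in this piece):

```sql
-- Hypothetical dbt model: mrt__attribution__comparison
-- Unions the per-model attribution marts and tags each row with its source model.
{% set models = ['first_touch', 'last_touch', 'linear', 'position_based', 'time_decay'] %}

{% for model in models %}
SELECT
    '{{ model }}' AS model_type,
    conversion__id,
    conversion__converted_at,
    touchpoint__channel,
    touchpoint__attributed_revenue
FROM {{ ref('mrt__attribution__' ~ model) }}
{% if not loop.last %}UNION ALL{% endif %}
{% endfor %}
```

Adding a new attribution model means adding one entry to the list; everything downstream, including the dashboard's model dropdown, picks it up automatically.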
Essential metrics
Channel contribution by model
The primary view. A bar chart showing attributed revenue by channel, with the attribution model as a dropdown filter. Users switch between first-touch, last-touch, linear, position-based, time-decay, and other models to see how credit shifts across channels.
This single view communicates the most important insight: which channels contribute the most, and how stable is that answer across different modeling assumptions? If the rankings stay roughly the same across models, confidence is high. If a channel jumps from #2 under first-touch to #7 under last-touch, that instability is the story.
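The query behind this view is simple. A sketch against the comparison model (the hard-coded model filter stands in for the dashboard's dropdown control):

```sql
SELECT
    touchpoint__channel,
    SUM(touchpoint__attributed_revenue) AS attributed_revenue
FROM {{ ref('mrt__attribution__comparison') }}
WHERE model_type = 'last_touch'  -- in the dashboard, the dropdown control supplies this filter
GROUP BY touchpoint__channel
ORDER BY attributed_revenue DESC
```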
Model comparison view
A grouped bar chart showing all models side by side for each channel. This immediately highlights where models diverge. The channels with the widest spread between bars are the ones where attribution is most uncertain — the ones worth investigating with incrementality tests.
This view is the visual representation of the disagreement-as-signal concept. The spread itself is the insight.
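If you want to rank channels by that spread rather than eyeball it, a query along these lines (a sketch against the comparison model) surfaces the widest disagreements first:

```sql
WITH per_model AS (

    -- Attributed revenue per channel under each model
    SELECT
        touchpoint__channel,
        model_type,
        SUM(touchpoint__attributed_revenue) AS model_revenue
    FROM {{ ref('mrt__attribution__comparison') }}
    GROUP BY touchpoint__channel, model_type

)

SELECT
    touchpoint__channel,
    MAX(model_revenue) - MIN(model_revenue) AS revenue_spread
FROM per_model
GROUP BY touchpoint__channel
ORDER BY revenue_spread DESC
```

Channels at the top of this list are the natural candidates for incrementality testing.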
Assisted conversions
Touchpoints that contributed to a conversion but weren’t the credited touchpoint under single-touch models. A path with n touchpoints contains n - 1 assists, so any conversion with more than one touchpoint counts as assisted:
```sql
WITH path_lengths AS (

    SELECT
        conversion__id,
        touchpoint__channel,
        COUNT(*) OVER (PARTITION BY conversion__id) AS total_touchpoints_in_path
    FROM {{ ref('mrt__attribution__first_touch') }}

)

SELECT
    touchpoint__channel,
    COUNT(DISTINCT conversion__id) AS total_conversions,
    COUNT(DISTINCT CASE WHEN total_touchpoints_in_path > 1 THEN conversion__id END) AS assisted_conversions,
    SAFE_DIVIDE(
        COUNT(DISTINCT CASE WHEN total_touchpoints_in_path > 1 THEN conversion__id END),
        COUNT(DISTINCT conversion__id)
    ) AS assist_rate
FROM path_lengths
GROUP BY touchpoint__channel
```

High assisted conversion rates indicate channels that nurture rather than close. These channels look weak under last-touch attribution but may be essential to the customer journey. This metric gives those channels a way to demonstrate their contribution outside the single-touch models that ignore them.
CPA and ROAS by model
Cost per acquisition and return on ad spend, calculated for each attribution model. The numbers shift depending on which model’s attributed revenue and conversion counts feed the calculation:
```sql
WITH channel_spend AS (

    -- Pre-aggregate spend per channel so the join doesn't fan out
    -- across touchpoint rows or split channels by daily spend values
    SELECT
        channel,
        SUM(total_spend) AS total_spend
    FROM {{ ref('mrt__marketing__channel_spend') }}
    GROUP BY channel

)

SELECT
    c.model_type,
    c.touchpoint__channel,
    SUM(c.touchpoint__attributed_revenue) AS attributed_revenue,
    s.total_spend,
    SAFE_DIVIDE(SUM(c.touchpoint__attributed_revenue), s.total_spend) AS roas,
    SAFE_DIVIDE(s.total_spend, COUNT(DISTINCT c.conversion__id)) AS cpa
FROM {{ ref('mrt__attribution__comparison') }} c
LEFT JOIN channel_spend s
    ON c.touchpoint__channel = s.channel
GROUP BY c.model_type, c.touchpoint__channel, s.total_spend
```

Joining attributed revenue with actual spend data from the ad platforms gives you ROAS calculated on neutral ground — same attribution methodology applied consistently across all channels, using actual spend rather than platform-reported metrics.
Path length distribution
A histogram showing how many touchpoints conversions typically include. Short paths (1-2 touchpoints) favor single-touch models — first-touch and last-touch give the same answer when there’s only one touchpoint. Longer paths (5+ touchpoints) make multi-touch models more important because single-touch models ignore the majority of the customer journey.
This distribution informs model selection. If 80% of your conversions have single-touchpoint paths, the difference between models is minimal and you can simplify your approach. If the median path length is 6 touchpoints, multi-touch models provide substantially different (and arguably more informative) results.
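The histogram itself can be computed from the comparison model. A sketch (filter to any single model, since each one carries every touchpoint in the path):

```sql
WITH path_lengths AS (

    SELECT
        conversion__id,
        COUNT(*) AS path_length
    FROM {{ ref('mrt__attribution__comparison') }}
    WHERE model_type = 'linear'  -- one model only, to avoid counting each touchpoint five times
    GROUP BY conversion__id

)

SELECT
    path_length,
    COUNT(*) AS conversions,
    SAFE_DIVIDE(COUNT(*), SUM(COUNT(*)) OVER ()) AS pct_of_conversions
FROM path_lengths
GROUP BY path_length
ORDER BY path_length
```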
Audience-tiered dashboard hierarchy
Not everyone needs the same level of detail. Structure your dashboards around decision-making needs, not organizational hierarchy.
Executive dashboard (CMO, CEO)
Marketing’s bottom-line impact. Total attributed revenue, revenue by channel at a high level, overall ROI, customer acquisition cost. Weekly or monthly cadence.
No model selection. Show a blended view or the company’s agreed-upon primary model. Executives need directional answers, not methodological choices. If you present five models to a CMO, they’ll ask “which one is right?” and the answer — “none of them, the range is the insight” — requires context that doesn’t fit on an executive dashboard.
Pre-select the view. Add a single callout for channels where models disagree significantly, framed as “areas of uncertainty” rather than “we don’t know.”
Manager dashboard
Channel performance by model, campaign-level breakdowns, budget allocation recommendations. Weekly cadence.
Include the model selector so managers can stress-test their channel’s performance under different assumptions. A paid media manager should be able to switch from last-touch (which probably makes their channel look good) to first-touch (which probably makes it look worse) and understand the range.
Add trend lines. A channel’s attributed revenue shifting between models over time reveals whether the channel’s role in the journey is changing — moving from awareness (first-touch credit) to conversion (last-touch credit) or vice versa.
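One way to build that trend line, sketched against the comparison model (the hard-coded channel stands in for a dashboard filter):

```sql
SELECT
    DATE_TRUNC(DATE(conversion__converted_at), WEEK) AS week,
    model_type,
    SUM(touchpoint__attributed_revenue) AS attributed_revenue
FROM {{ ref('mrt__attribution__comparison') }}
WHERE touchpoint__channel = 'paid_search'  -- illustrative; the dashboard filter supplies this
GROUP BY week, model_type
ORDER BY week, model_type
```

First-touch revenue falling while last-touch revenue rises suggests the channel is drifting toward the bottom of the funnel.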
Analyst dashboard
Granular touchpoint analysis, path exploration, model comparison details, conversion lag analysis. Daily access.
Full access to all filters and dimensions for deep investigation. This is where the detailed comparison table (not the pre-aggregated summary) should be accessible. Analysts need the ability to look at individual conversion paths when something unexpected appears in aggregate views.
Looker Studio implementation
Looker Studio is the path of least resistance for BigQuery shops — it’s free and connects natively. The implementation leverages the comparison model directly.
Data connection. Connect to the mrt__attribution__comparison_summary table in BigQuery. The pre-aggregated summary model gives better performance than the detailed table for most views.
Model selector. Create a dropdown control bound to the model_type field and add it to every page. This gives users one-click switching between models. The dropdown automatically picks up new models when you add them to the comparison union — no dashboard changes needed.
Comparison pivot. For the model comparison view, use a pivot table with channel as rows, model_type as columns, and SUM(touchpoint__attributed_revenue) as the metric. This creates a side-by-side comparison without complex chart configuration.
Date range control. Add a date range control bound to conversion_date. Attribution shifts over time as your channel mix changes, so the date range matters more than in many other dashboards.
Working around Looker Studio limitations
Looker Studio is free and integrates well with BigQuery, but it has constraints that affect attribution dashboards specifically.
5,000 row visualization limit. Charts can only display 5,000 rows. For touchpoint-level data, this limit hits quickly. The pre-aggregated summary model in the comparison pattern solves this — aggregate to day/channel level in dbt rather than relying on Looker Studio to aggregate on render.
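The shape of that summary model might look like this (a sketch; one row per day, channel, and model keeps even a year of data far below the limit):

```sql
-- Hypothetical shape of mrt__attribution__comparison_summary
SELECT
    DATE(conversion__converted_at) AS conversion_date,
    model_type,
    touchpoint__channel,
    COUNT(DISTINCT conversion__id) AS conversions,
    SUM(touchpoint__attributed_revenue) AS attributed_revenue
FROM {{ ref('mrt__attribution__comparison') }}
GROUP BY conversion_date, model_type, touchpoint__channel
```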
No native Sankey diagrams. Path visualization is a common request for attribution dashboards, but Looker Studio doesn’t support Sankey charts. Workarounds:
- Third-party community visualization plugins (quality varies)
- A path analysis table showing the top N channel sequences ranked by conversion count
- Move path visualization to a separate tool if it’s a critical requirement
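The path analysis table from the second workaround is straightforward in SQL. A sketch, assuming the touchpoint mart carries an event timestamp (touchpoint__occurred_at here is a hypothetical column name):

```sql
WITH paths AS (

    -- One row per conversion, with its channels concatenated in order
    SELECT
        conversion__id,
        STRING_AGG(touchpoint__channel, ' > ' ORDER BY touchpoint__occurred_at) AS channel_path
    FROM {{ ref('mrt__attribution__first_touch') }}
    GROUP BY conversion__id

)

SELECT
    channel_path,
    COUNT(*) AS conversions
FROM paths
GROUP BY channel_path
ORDER BY conversions DESC
LIMIT 20
```

It lacks the visual flow of a Sankey diagram, but it answers the same underlying question: which sequences of channels actually lead to conversions.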
Limited drill-through. You can’t easily click a channel and drill into its campaigns, then into individual conversions. Work around this with linked dashboards: add a filter parameter that passes the selected channel to a detail page via URL parameters.
No version control. Dashboard changes aren’t tracked. Document changes manually, or maintain a changelog alongside your dbt project. This is a governance gap — the data layer has full Git history through dbt, but the visualization layer has none.
When to consider alternatives
Looker Studio’s limitations become painful at scale. Consider alternatives when:
- You need path visualization (Sankey diagrams) as a core feature
- You need drill-through across multiple levels of detail
- Dashboard governance (version control, review workflows) is a requirement
- Your team exceeds ~20 regular dashboard users and needs differentiated permissions
Tableau (~$70/user/month) offers better path visualization and drill-through. Metabase (free self-hosted) works well if you’re already running it. Power BI ($10-20/user/month) integrates with Microsoft ecosystems. Lightdash reads from dbt YAML directly. Each has tradeoffs; Looker Studio remains the default for small-to-medium BigQuery teams.