Three downstream patterns become practical once a unified cross-platform ad model is running: blended ROAS, budget pacing, and Marketing Mix Modeling data preparation. Each addresses a question that requires centralized cross-platform data.
Blended ROAS
Blended ROAS is total revenue divided by total ad spend across all channels. It’s the most honest measure of marketing efficiency you can build, and it sidesteps the attribution disagreement entirely.
After iOS 14.5's App Tracking Transparency rollout, per-channel attribution became unreliable. Meta's conversion numbers dropped. Google's didn't change much. LinkedIn continued with its 90-day window. Arguing about which platform "really" drove which conversion became increasingly unproductive. Blended ROAS takes a different approach: instead of trying to assign credit per channel, look at the overall ratio and optimize accordingly.
```sql
-- mrt__marketing__blended_roas.sql
WITH daily_spend AS (
    SELECT
        date_day,
        SUM(campaign__spend) AS total_spend
    FROM {{ ref('mrt__marketing__campaign_report') }}
    GROUP BY 1
),

daily_revenue AS (
    SELECT
        DATE(order__created_at) AS date_day,
        SUM(order__revenue) AS total_revenue
    FROM {{ ref('int__order__enriched') }}
    WHERE order__is_completed = TRUE
    GROUP BY 1
)

SELECT
    ds.date_day,
    ds.total_spend,
    COALESCE(dr.total_revenue, 0) AS total_revenue,
    SAFE_DIVIDE(dr.total_revenue, ds.total_spend) AS blended_roas,
    -- Rolling 7-day and 30-day ROAS for trend analysis
    SAFE_DIVIDE(
        SUM(dr.total_revenue) OVER (ORDER BY ds.date_day ROWS BETWEEN 6 PRECEDING AND CURRENT ROW),
        SUM(ds.total_spend) OVER (ORDER BY ds.date_day ROWS BETWEEN 6 PRECEDING AND CURRENT ROW)
    ) AS blended_roas_7d,
    SAFE_DIVIDE(
        SUM(dr.total_revenue) OVER (ORDER BY ds.date_day ROWS BETWEEN 29 PRECEDING AND CURRENT ROW),
        SUM(ds.total_spend) OVER (ORDER BY ds.date_day ROWS BETWEEN 29 PRECEDING AND CURRENT ROW)
    ) AS blended_roas_30d
FROM daily_spend ds
LEFT JOIN daily_revenue dr USING (date_day)
ORDER BY ds.date_day
```

The rolling 7-day and 30-day ROAS smooth out daily volatility and give the marketing team a meaningful trend. A single bad day doesn't signal a problem. A 30-day downtrend does.
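The same smoothing can be sketched in pandas for spot-checking the SQL output (hypothetical daily numbers; the key detail is taking a ratio of rolling sums, which is what the SQL window functions compute):

```python
import pandas as pd

# Eight hypothetical days of daily totals.
df = pd.DataFrame({
    "total_spend":   [100.0, 120.0,  90.0, 110.0, 105.0,  95.0, 130.0, 100.0],
    "total_revenue": [300.0, 250.0, 310.0, 280.0, 330.0, 290.0, 260.0, 320.0],
})

# Ratio of rolling sums, NOT a rolling mean of daily ratios: this matches
# the SQL window functions and weights each day by its spend.
# min_periods=1 mirrors SQL's partial windows at the start of the series.
df["blended_roas_7d"] = (
    df["total_revenue"].rolling(7, min_periods=1).sum()
    / df["total_spend"].rolling(7, min_periods=1).sum()
)
```

Taking a rolling mean of daily ROAS values would let a low-spend day distort the trend; the ratio-of-sums form doesn't.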
One important principle: platform-specific attribution data should never be compared directly. Each platform applies its own attribution logic and tends to over-claim, taking full credit for conversions that other channels also touched. Blended metrics and UTM-based attribution are more reliable than trying to reconcile what each platform says it drove.
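A toy illustration of the over-claiming problem (all numbers hypothetical): summing each platform's self-attributed revenue exceeds what the finance system actually recorded, while blended ROAS sidesteps the dispute by using ground-truth totals.

```python
# Each platform's self-reported attributed revenue (hypothetical figures).
platform_claimed_revenue = {
    "google_ads": 120_000,
    "facebook_ads": 95_000,
    "linkedin_ads": 30_000,
}
actual_revenue = 180_000   # what the order system actually recorded
total_spend = 60_000

# Summing per-platform claims double-counts multi-touch conversions.
claimed_total = sum(platform_claimed_revenue.values())   # 245_000
overclaim_pct = (claimed_total - actual_revenue) / actual_revenue

# Blended ROAS ignores the credit dispute entirely.
blended_roas = actual_revenue / total_spend

print(f"Platforms claim {claimed_total:,} vs actual {actual_revenue:,} "
      f"({overclaim_pct:.0%} over-claimed); blended ROAS = {blended_roas:.1f}")
```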
Budget Pacing
Budget pacing compares month-to-date spend against monthly budgets with daily burn rate projections. When spend is running 20% ahead of pace, the media team can throttle campaigns before month-end rather than discovering the overspend in the next invoice.
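The arithmetic behind "20% ahead of pace" is small enough to sketch directly (a hypothetical Python helper, not part of any library, mirroring the projection and threshold logic the dbt model implements in SQL):

```python
from datetime import date
import calendar

def pacing_status(mtd_spend: float, monthly_budget: float, today: date,
                  threshold: float = 0.10) -> tuple[float, str]:
    """Project month-end spend from the current daily burn rate and flag it."""
    days_elapsed = today.day
    total_days = calendar.monthrange(today.year, today.month)[1]
    # Linear projection: assume the current burn rate holds all month.
    projected = mtd_spend / days_elapsed * total_days
    if projected > monthly_budget * (1 + threshold):
        status = "over_pacing"
    elif projected < monthly_budget * (1 - threshold):
        status = "under_pacing"
    else:
        status = "on_track"
    return projected, status

# E.g. $12,000 spent by March 10 against a $25,000 budget:
# 12_000 / 10 * 31 = 37_200, well above the 10% tolerance band.
projected, status = pacing_status(12_000, 25_000, date(2026, 3, 10))
```

The linear projection is deliberately naive; platforms that front-load or back-load spend within a month will need a weighted curve instead.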
Build it as a dbt model joining unified spend with a budget seed file:
The seed (`monthly_ad_budgets.csv`):

```csv
platform,campaign_group,month,budget_amount
google_ads,brand,2026-03-01,25000
google_ads,performance,2026-03-01,50000
facebook_ads,prospecting,2026-03-01,30000
facebook_ads,retargeting,2026-03-01,15000
linkedin_ads,brand-awareness,2026-03-01,10000
```

And the pacing model:

```sql
-- mrt__marketing__budget_pacing.sql
WITH monthly_spend AS (
    SELECT
        DATE_TRUNC(date_day, MONTH) AS month_start,
        platform,
        SUM(campaign__spend) AS mtd_spend,
        COUNT(DISTINCT date_day) AS days_elapsed
    FROM {{ ref('mrt__marketing__campaign_report') }}
    WHERE DATE_TRUNC(date_day, MONTH) = DATE_TRUNC(CURRENT_DATE(), MONTH)
    GROUP BY 1, 2
),

budgets AS (
    SELECT
        CAST(month AS DATE) AS month_start,
        platform,
        SUM(budget_amount) AS monthly_budget
    FROM {{ ref('monthly_ad_budgets') }}
    GROUP BY 1, 2
),

days_in_month AS (
    SELECT DATE_DIFF(
        DATE_ADD(DATE_TRUNC(CURRENT_DATE(), MONTH), INTERVAL 1 MONTH),
        DATE_TRUNC(CURRENT_DATE(), MONTH),
        DAY
    ) AS total_days
)

SELECT
    ms.month_start,
    ms.platform,
    ms.mtd_spend,
    b.monthly_budget,
    ms.days_elapsed,
    dim.total_days,
    -- Pacing metrics
    SAFE_DIVIDE(ms.mtd_spend, b.monthly_budget) AS budget_utilization_pct,
    SAFE_DIVIDE(ms.mtd_spend, ms.days_elapsed) AS daily_burn_rate,
    SAFE_DIVIDE(b.monthly_budget, dim.total_days) AS target_daily_rate,
    -- Projected month-end spend at current burn rate
    SAFE_DIVIDE(ms.mtd_spend, ms.days_elapsed) * dim.total_days AS projected_month_end_spend,
    -- Over/under pacing flag
    CASE
        WHEN SAFE_DIVIDE(ms.mtd_spend, ms.days_elapsed) * dim.total_days > b.monthly_budget * 1.10
            THEN 'over_pacing'
        WHEN SAFE_DIVIDE(ms.mtd_spend, ms.days_elapsed) * dim.total_days < b.monthly_budget * 0.90
            THEN 'under_pacing'
        ELSE 'on_track'
    END AS pacing_status
FROM monthly_spend ms
LEFT JOIN budgets b
    ON ms.month_start = b.month_start
    AND ms.platform = b.platform
CROSS JOIN days_in_month dim
```

The `pacing_status` flag is the key output. Wire it to a Slack alert or dashboard filter so the media buying team sees `over_pacing` immediately. The 10% threshold for over/under-pacing is a starting point; tighten it for platforms with larger budgets, where overruns are more costly.
The budget seed file requires manual maintenance. Some teams automate this by pulling budget data from a spreadsheet via a Google Sheets connector or from a planning tool’s API. The important thing is that the budget data lives alongside the spend data in the warehouse, where the comparison can be automated.
Marketing Mix Modeling Data Preparation
Marketing Mix Modeling (MMM) is the statistical approach to measuring marketing effectiveness at the channel level, accounting for external factors that attribution models miss. Tools like Meta’s Robyn and PyMC-Marketing have made MMM accessible to smaller teams, but they require daily-grain data in a specific structure.
Your unified ad model already produces most of what MMM needs:
| Required Field | Source |
|---|---|
| Date | date_day from unified model |
| Channel | platform from unified model |
| Spend | campaign__spend aggregated to daily × channel |
| Revenue | Total daily revenue from CRM/ecommerce |
| External factors | Seasonality, holidays, competitor activity |
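Most MMM libraries want this data reshaped to wide format: one row per day, one spend column per channel. A pandas sketch of the pivot (column names taken from the model above; the exact wide layout your MMM tool expects is an assumption to verify against its docs):

```python
import pandas as pd

# Long-format output of the unified model: one row per day x channel.
long_df = pd.DataFrame({
    "date_day": ["2026-03-01", "2026-03-01", "2026-03-02", "2026-03-02"],
    "channel": ["google_ads", "facebook_ads", "google_ads", "facebook_ads"],
    "spend": [1200.0, 800.0, 1100.0, 950.0],
})

# Pivot to wide: one row per day, one spend_<channel> column per channel.
# fillna(0.0) covers days where a channel had no spend.
wide_df = (
    long_df.pivot(index="date_day", columns="channel", values="spend")
    .add_prefix("spend_")
    .fillna(0.0)
    .reset_index()
)
# Columns: date_day, spend_facebook_ads, spend_google_ads
```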
The gap between what the unified model provides and what MMM tools require is the “untracked” revenue — the difference between total daily revenue and the sum of tracked revenue per channel. This gap is the MMM model’s opportunity: it represents the effect of brand awareness, organic search, word-of-mouth, and other unmeasured channels.
```sql
-- mrt__marketing__mmm_input.sql
WITH daily_channel_spend AS (
    SELECT
        date_day,
        platform AS channel,
        SUM(campaign__spend) AS spend,
        SUM(campaign__impressions) AS impressions,
        SUM(campaign__clicks) AS clicks
    FROM {{ ref('mrt__marketing__campaign_report') }}
    GROUP BY 1, 2
),

daily_revenue AS (
    SELECT
        DATE(order__created_at) AS date_day,
        SUM(order__revenue) AS total_revenue
    FROM {{ ref('int__order__enriched') }}
    WHERE order__is_completed = TRUE
    GROUP BY 1
),

tracked_revenue AS (
    SELECT
        date_day,
        SUM(attributed_revenue) AS tracked_revenue
    FROM {{ ref('mrt__attribution__channel_daily') }}
    GROUP BY 1
)

SELECT
    dcs.date_day,
    dcs.channel,
    dcs.spend,
    dcs.impressions,
    dcs.clicks,
    dr.total_revenue,
    tr.tracked_revenue,
    dr.total_revenue - COALESCE(tr.tracked_revenue, 0) AS untracked_revenue
FROM daily_channel_spend dcs
LEFT JOIN daily_revenue dr ON dcs.date_day = dr.date_day
LEFT JOIN tracked_revenue tr ON dcs.date_day = tr.date_day
```

The `untracked_revenue` column becomes a critical input for MMM. If it's consistently 40% of total revenue, that tells you a large share of your revenue comes from sources that per-channel attribution can't measure: brand equity, organic growth, or channels you're not tracking. MMM attempts to decompose that gap into its component drivers.
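The gap calculation itself is just subtraction, but it's worth seeing with numbers (hypothetical daily totals):

```python
# One hypothetical day of revenue data.
total_revenue = 50_000.0
tracked_by_channel = {
    "google_ads": 14_000.0,
    "facebook_ads": 10_000.0,
    "linkedin_ads": 4_000.0,
}

tracked = sum(tracked_by_channel.values())    # 28_000
untracked = total_revenue - tracked           # 22_000
untracked_share = untracked / total_revenue   # 0.44

# A persistently high share (here 44%) is the signal that brand, organic,
# and other unmeasured channels warrant MMM-style decomposition.
```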
Relationship to the Unified Model
Without the unified model, each of these analyses requires manual data gathering across multiple platform dashboards. With it, they are dbt models that run automatically and stay current.
The foundation remains the same throughout: five comparable metrics, standardized campaign names, and spend reconciliation tests. Every pattern in this section builds directly on those three pieces.