DTS delivers raw Google Ads tables. The dbt layer handles the transformations needed to produce usable reporting: click-type filtering, micros conversion, attribute/stats JOINs, and incremental strategy configuration matched to DTS’s partition-replacement behavior.
The Starting Point: Fivetran’s dbt Package
Even if you’re loading via DTS rather than Fivetran, Fivetran’s dbt_google_ads package is worth understanding. It produces 29 materialized models covering account, campaign, ad group, keyword, ad, URL, and search report levels. The package’s source models can be adapted to point at DTS tables with minor schema mapping — the transformation logic is sound even if the source assumptions need adjustment.
For teams not on Fivetran, the package serves as a reference implementation: here’s what a production-grade Google Ads dbt project looks like, here’s which tables to join to which, and here’s which metrics to trust.
If you’re already using Fivetran for other connectors and considering switching Google Ads to DTS to save money, run the numbers first. Fivetran’s package eliminates a lot of the work described below.
Source Configuration
Start by declaring DTS tables as dbt sources. You’ll reference the p_ prefixed tables — the partitioned versions that enable efficient filtering. If your Customer ID changes or you add MCC accounts, updating the source YAML is the only change needed:
```yaml
version: 2

sources:
  - name: google_ads
    database: "{{ env_var('GCP_PROJECT') }}"
    schema: google_ads_CUSTOMERID  # Replace with your actual dataset
    tables:
      - name: p_CampaignBasicStats_CUSTOMERID
        identifier: p_CampaignBasicStats_CUSTOMERID
      - name: p_Campaign_CUSTOMERID
        identifier: p_Campaign_CUSTOMERID
      - name: p_AdGroupBasicStats_CUSTOMERID
        identifier: p_AdGroupBasicStats_CUSTOMERID
      - name: p_AdGroup_CUSTOMERID
        identifier: p_AdGroup_CUSTOMERID
```

The dataset name includes the Customer ID by default (`google_ads_1234567890`). For MCC setups where multiple accounts land in the same dataset, the Customer ID is embedded in the data itself rather than in the table name.
The Incremental Strategy
DTS refreshes data by replacing entire date partitions on each run. When DTS runs for today, it overwrites today’s partitions in all tables. This behavior has a direct implication for how you configure dbt incremental models on top of DTS data: you need a strategy that respects partition replacement rather than appending new rows.
On BigQuery, insert_overwrite is the right choice:
```sql
{{ config(
    materialized='incremental',
    partition_by={"field": "_DATA_DATE", "data_type": "date"},
    incremental_strategy='insert_overwrite'
) }}
```

This replaces the dbt model's partitions on each run, matching DTS's behavior. If DTS updated the last 7 days of data, your dbt model replaces the same 7 days. No duplicates, no stale data accumulation.
The alternative — merge with a unique_key — works but is less efficient for this pattern. DTS replaces partitions wholesale; a merge would do row-level comparison across potentially millions of rows when partition replacement is all you actually need.
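For contrast, the merge-based configuration would look roughly like this (a sketch; the `unique_key` columns assume a daily campaign grain, which is the grain used for the base model later in this post):

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    partition_by={"field": "date_day", "data_type": "date"},
    unique_key=['date_day', 'campaign_id']
) }}
```

With this strategy, BigQuery compares incoming rows against existing rows on the key columns and updates matches in place — correct, but unnecessary work when DTS has already replaced the partition wholesale upstream.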
The Conversion Lookback Window
Google Ads conversions arrive late. A click happens today; the conversion from that click might be attributed back 7, 14, or 30 days later as the conversion window closes. DTS handles this by including a configurable refresh window (see Google Ads BigQuery DTS Setup), but your dbt models need to match that behavior to avoid silently stale conversion numbers.
If your DTS refresh window is 30 days and your dbt incremental model only processes the last 7 days, you’ll miss conversion updates that DTS made to older data. The fix is to match your dbt lookback to your DTS refresh window:
```sql
{% if is_incremental() %}
WHERE _DATA_DATE >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
{% endif %}
```

This means each dbt run reprocesses the last 30 days of data. On BigQuery with insert_overwrite, replaced partitions don't create duplicates — BigQuery swaps each partition atomically. The trade-off is query cost versus data accuracy. For advertising data, where conversion accuracy directly affects optimization decisions, accuracy wins.
The 30-day window is the right default for most accounts. Google Ads conversions can technically update retroactively for up to 90 days, but most changes stabilize within 30. If you have longer-funnel conversion events (typical in B2B), extend to 60 or 90 days and document why.
This is a specific application of the general lookback window pattern for incremental models. The Google Ads-specific detail is that your lookback window should match your DTS refresh window exactly — otherwise you’re paying for DTS to update data that your dbt models won’t pick up.
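One way to keep the two windows from drifting apart is to define the lookback once as a project variable and reference it in every incremental filter. This is a sketch; the `google_ads_lookback_days` variable name is an assumption, not a dbt built-in:

```yaml
# dbt_project.yml
vars:
  google_ads_lookback_days: 30  # Must match the DTS refresh window setting
```

```sql
{% if is_incremental() %}
WHERE _DATA_DATE >= DATE_SUB(
    CURRENT_DATE(),
    INTERVAL {{ var('google_ads_lookback_days') }} DAY
)
{% endif %}
```

If you later extend the DTS refresh window for longer-funnel conversions, one line in `dbt_project.yml` updates every model that uses the filter.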
The Full Base Model
Putting it together: a base model that handles the ClickType impression trap, the micros conversion, the JOIN between attribute and stats tables, and the incremental strategy:
```sql
-- models/base/google_ads/base__google_ads__campaign_stats.sql
{{ config(
    materialized='incremental',
    partition_by={"field": "date_day", "data_type": "date"},
    incremental_strategy='insert_overwrite'
) }}

SELECT
    cs._DATA_DATE AS date_day,
    c.campaign_name,
    c.campaign_status,
    cs.campaign_id,
    -- Impressions: filter to URL_CLICKS to avoid ClickType duplication
    SUM(CASE WHEN cs.segments_click_type = 'URL_CLICKS'
        THEN cs.metrics_impressions ELSE 0 END) AS impressions,
    -- Clicks: sum across all click types
    SUM(cs.metrics_clicks) AS clicks,
    SUM(cs.metrics_interactions) AS interactions,
    -- Cost: divide by 1,000,000 to convert micros to currency
    SUM(cs.metrics_cost_micros) / 1000000 AS cost,
    SUM(cs.metrics_conversions) AS conversions,
    SUM(cs.metrics_conversions_value) AS conversions_value,
    SAFE_DIVIDE(
        SUM(cs.metrics_conversions_value),
        SUM(cs.metrics_cost_micros) / 1000000
    ) AS roas
FROM `{{ source('google_ads', 'p_CampaignBasicStats_CUSTOMERID') }}` cs
-- Join to attribute table for human-readable names
LEFT JOIN `{{ source('google_ads', 'p_Campaign_CUSTOMERID') }}` c
    ON cs.campaign_id = c.campaign_id
    -- _LATEST_DATE filter: get current campaign name, not historical names
    AND c._DATA_DATE = c._LATEST_DATE
{% if is_incremental() %}
WHERE cs._DATA_DATE >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
{% endif %}
GROUP BY 1, 2, 3, 4
```

A few things in this model are worth naming explicitly:

- `c._DATA_DATE = c._LATEST_DATE` — attribute tables are date-partitioned too, with values changing as campaigns get renamed or reconfigured. The `_LATEST_DATE` pseudo-column filters to the most recent attribute record without requiring you to know today's date explicitly. Without this filter, campaigns that were renamed will show multiple rows (one per name).
- The `SAFE_DIVIDE` for ROAS — prevents division-by-zero errors for campaigns with zero spend.
- The `GROUP BY` on campaign attributes — since you're joining a dimension table, you need to group by the attributes to avoid fan-out.
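The fan-out risk is worth guarding with a test on the model's grain. A sketch, assuming the dbt_utils package is installed as a dependency:

```yaml
# models/base/google_ads/_base__google_ads__models.yml
version: 2

models:
  - name: base__google_ads__campaign_stats
    tests:
      # One row per campaign per day; fails if the attribute JOIN fans out
      - dbt_utils.unique_combination_of_columns:
          combination_of_columns:
            - date_day
            - campaign_id
```

If a future change to the attribute JOIN drops the `_LATEST_DATE` filter, this test fails on the next run instead of quietly doubling your metrics.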
Cross-Platform Reporting
If Google Ads is one of several ad platforms you’re bringing into a unified model, the intermediate layer is where platform-specific normalization happens before the UNION. The base model above handles DTS’s specific quirks. An intermediate model normalizes to the shared schema that dbt_ad_reporting or your custom unified model expects:
```sql
-- models/intermediate/google_ads/int__google_ads__campaign_report.sql
SELECT
    date_day,
    'google_ads' AS platform,
    campaign_id,
    campaign_name,
    impressions,
    clicks,
    cost AS spend,
    conversions,
    conversions_value
FROM {{ ref('base__google_ads__campaign_stats') }}
```

This is the model that feeds into your cross-platform UNION mart. By the time data reaches the UNION, every platform looks the same — same column names, same data types, same units. All the DTS-specific handling is encapsulated in the base layer and invisible to downstream consumers.
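Downstream, the mart itself is just a UNION of per-platform intermediates. A sketch — the non-Google model names here are hypothetical placeholders for whatever other platforms you load:

```sql
-- models/marts/ads/ads__campaign_report.sql
SELECT * FROM {{ ref('int__google_ads__campaign_report') }}
UNION ALL
SELECT * FROM {{ ref('int__facebook_ads__campaign_report') }}
UNION ALL
SELECT * FROM {{ ref('int__microsoft_ads__campaign_report') }}
```

Because every intermediate already conforms to the shared schema, the mart needs no platform-specific logic at all.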