
iOS 14.5 Signal Loss and Meta Measurement

How Apple's App Tracking Transparency changed Meta ad measurement — IDFA collapse, default attribution window changes, Aggregated Event Measurement, and Conversions API as the response.


Apple’s App Tracking Transparency (ATT) rollout in April 2021 changed how Meta measures ad performance, and its impact has since become the baseline for all Meta measurement. This note covers what changed, how Meta’s measurement framework adapted, and the implications for warehouse pipeline design and data interpretation.

What ATT Changed

Before ATT, Meta (like every other ad platform) could use the IDFA (Identifier for Advertisers) to track iPhone users across apps and websites. When someone clicked a Meta ad and later completed a purchase in an app, Meta knew — precisely, deterministically — that the click led to the purchase.

ATT required apps to ask users for explicit permission to track them. The opt-in rate settled around 30-40% globally among users shown the prompt, but effective IDFA availability dropped to roughly 6% of the iOS population (the gap reflects the default-off state: the identifier is available only for users who actively opt in). Most iPhone users became effectively invisible to cross-app tracking.

The direct consequences for Meta measurement:

Attribution windows shrank. Meta’s default attribution windows dropped from 28-day click + 1-day view to 7-day click + 1-day view. The longer windows required the kind of deterministic cross-session tracking that ATT made impossible at scale. A 28-day click window on estimated data is too noisy to be meaningful.

Conversions became estimated. Meta increasingly reports “estimated” conversions based on statistical modeling rather than deterministic event matching. When Meta can’t observe the full conversion path, it uses aggregate patterns from similar users who did opt in to model what likely happened for users who didn’t. These modeled numbers update as more signal arrives, which is one reason why attribution data continues to change for days after the fact.

Reach became less precise. The reach metric shifted from “we tracked these specific users” to “we estimate this many unique users saw your ad.” The change was methodological, not just semantic — estimated reach behaves differently under aggregation than deterministic reach did.

Aggregated Event Measurement

Meta’s response was Aggregated Event Measurement (AEM), a framework for measuring website conversions from iOS users using privacy-preserving aggregation. The key features:

  • Conversions are measured in aggregate, not at the individual user level
  • Reported data is delayed by up to 72 hours before it appears in the Insights API; this is deliberate, part of the privacy architecture, and it persists even after the June 2025 simplification
  • Historically there was a limit of 8 prioritized conversion events per domain; Meta removed the limit in June 2025, and event configuration is now automatic

For pipeline design, the 72-hour delay on AEM conversions is the critical constraint. Any lookback window shorter than 3 days will miss AEM conversions entirely. The 28-day lookback recommended for Meta pipelines covers AEM delay with considerable margin.
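As a sketch, that lookback can be a fixed rolling window computed at extraction time. The `extraction_window` helper below is illustrative, not part of any Meta SDK; the actual fetch step is out of scope:

```python
from datetime import date, timedelta

LOOKBACK_DAYS = 28  # covers the 72-hour AEM delay with considerable margin

def extraction_window(today: date, lookback_days: int = LOOKBACK_DAYS) -> tuple[str, str]:
    """Return the (since, until) date range to re-extract on each run.

    Re-fetching the full window on every run overwrites provisional
    numbers as Meta's modeled conversions stabilize.
    """
    since = today - timedelta(days=lookback_days)
    return since.isoformat(), today.isoformat()

since, until = extraction_window(date(2026, 1, 15))
# since = "2025-12-18", until = "2026-01-15"
```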

Conversions API as the Response

The primary industry response to signal loss is the Conversions API (CAPI): sending conversion events directly from your server to Meta, bypassing browser-side tracking entirely. CAPI events aren’t affected by ad blockers, browser restrictions, or iOS privacy changes because they originate from your server, not the user’s device.

The reported impact is real. Teams implementing CAPI consistently see 15-25% more attributed conversions within the first quarter of implementation compared to Pixel-only measurement. The lift comes from events that were previously lost to browser restrictions now reaching Meta.

But CAPI introduces its own complexity: deduplication. If you’re sending events through both the Pixel (browser-side) and CAPI (server-side), Meta receives the same conversion from two sources. Without deduplication, both get counted separately, inflating your conversion numbers.

Deduplication requires matching event_id values across both signals:

// Browser-side Pixel call
fbq('track', 'Purchase', {
  value: 49.99,
  currency: 'USD'
}, {
  eventID: 'order-123-abc'  // must match the CAPI event_id
});

# Server-side CAPI call
import time

event = {
    'event_name': 'Purchase',
    'event_time': int(time.time()),
    'event_id': 'order-123-abc',  # must match the Pixel eventID
    'user_data': {...},  # hashed customer identifiers
    'custom_data': {'value': 49.99, 'currency': 'USD'},
}

Meta matches events with the same event_id within a 48-hour window and deduplicates them. When working correctly, Events Manager shows “1 event from 2 sources” rather than two separate conversions. See Meta CAPI Server-Side Setup: Deduplication and Event Match Quality for the full implementation details.
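The matching rule can be illustrated with a small simulation. This is a simplification for intuition only, not Meta's actual implementation; `deduplicate` and the event dicts are hypothetical:

```python
from datetime import datetime, timedelta

DEDUP_WINDOW = timedelta(hours=48)

def deduplicate(events):
    """Collapse events that share (event_name, event_id) and arrive
    within the 48-hour window, keeping the first arrival.
    A toy model of Meta's server-side behaviour."""
    seen = {}   # (event_name, event_id) -> time of last kept event
    kept = []
    for ev in sorted(events, key=lambda e: e["event_time"]):
        key = (ev["event_name"], ev["event_id"])
        first = seen.get(key)
        if first is not None and ev["event_time"] - first <= DEDUP_WINDOW:
            continue  # duplicate within the window: dropped
        seen[key] = ev["event_time"]
        kept.append(ev)
    return kept

t0 = datetime(2026, 1, 1, 12, 0)
events = [
    {"event_name": "Purchase", "event_id": "order-123-abc",
     "source": "pixel", "event_time": t0},
    {"event_name": "Purchase", "event_id": "order-123-abc",
     "source": "capi", "event_time": t0 + timedelta(minutes=5)},
]
merged = deduplicate(events)
# one event survives: "1 event from 2 sources"
```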

For warehouse pipelines, deduplication affects which numbers you trust. If your Pixel and CAPI are sending matching events with proper event_id matching, Meta’s API returns deduplicated conversion counts. If deduplication isn’t set up, you’ll see inflated conversions in the raw API data. Validate this by checking whether total conversions dropped after enabling deduplication on an existing Pixel + CAPI setup: a drop relative to the non-deduplicated numbers is correct and expected.

What This Means for Your Warehouse Numbers

Several implications for pipeline design and data interpretation:

Modeled numbers are normal. Post-ATT Meta conversion data is a mix of deterministic conversions (users who opted in) and modeled ones (users who didn’t). Both are included in the numbers Meta reports. This is documented behavior, not a bug. Accept it and document it.

Recent data is provisional. Modeled conversions update as more signal arrives, so yesterday’s purchase count can change tomorrow. This compounds the lookback window requirement: you’re not just waiting for late-arriving events, you’re waiting for models to stabilize.

Reach can’t be summed. Estimated reach for an audience segment can’t be added across days or breakdowns without double-counting. If you request daily reach data and sum it, you’ll get a number higher than actual unique viewers because the same person can be estimated as reached on multiple days. Store reach metrics carefully and never sum them in aggregations.
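A minimal illustration of the pitfall, using hypothetical daily rows from an insights extraction:

```python
# Hypothetical daily rows as they might land in a warehouse staging table.
daily = [
    {"date": "2026-01-01", "reach": 10_000, "impressions": 14_000},
    {"date": "2026-01-02", "reach": 9_500, "impressions": 13_200},
    {"date": "2026-01-03", "reach": 9_800, "impressions": 13_900},
]

# WRONG as "unique users": the same person can be counted on several days.
summed_reach = sum(r["reach"] for r in daily)              # 29_300

# Fine: impressions are additive events, not unique counts.
summed_impressions = sum(r["impressions"] for r in daily)  # 41_100

# Period-level reach must come from a separate API call covering the
# whole date range; it cannot be derived from the daily rows.
```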

Conversion rate benchmarks shifted. Any benchmark for Meta conversion rates or ROAS that predates April 2021 is using a different methodology. Historical comparisons crossing ATT rollout need to account for the measurement methodology change, not just campaign performance changes.

Signal loss creates systematic bias. The users who opted out of tracking skew toward privacy-conscious, often higher-income demographics. This means Meta’s modeled data may systematically under-represent some user segments. For advertisers targeting these segments, the undercount may be more pronounced than the average 6% IDFA availability figure suggests.

The 2026 State of Things

ATT’s impact has plateaued but hasn’t reversed. Apple has shown no signs of loosening ATT requirements. Meta’s statistical modeling has improved — the estimated conversions are more accurate than they were in 2021 — but deterministic cross-app tracking at scale is gone permanently.

The practical response for 2026:

  1. Implement CAPI if you haven’t — the 15-25% conversion recovery is real
  2. Set 28-day lookback windows in your extraction pipeline to catch AEM delays
  3. Specify attribution windows explicitly in API calls and document which windows you’re using
  4. Accept and document variance between warehouse and Ads Manager, especially for iOS-heavy audiences
  5. Use Conversions Value (revenue) as your primary ROAS signal rather than conversion counts — revenue is less affected by modeling uncertainty than event counts
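For point 3, a sketch of what pinning attribution windows looks like in a raw Insights API request. The `action_attribution_windows` parameter exists on the Graph API insights edge, but the API version and account ID below are placeholders to verify against your own setup:

```python
import urllib.parse

# Pin attribution windows explicitly instead of relying on defaults,
# and document the choice alongside the pipeline.
params = {
    "level": "campaign",
    "fields": "spend,actions,action_values",
    "time_range": '{"since":"2025-12-18","until":"2026-01-15"}',
    "action_attribution_windows": '["7d_click","1d_view"]',
}
query = urllib.parse.urlencode(params)
url = f"https://graph.facebook.com/v21.0/act_<AD_ACCOUNT_ID>/insights?{query}"
```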

The era of pixel-perfect Meta attribution is over. The pipelines that work well in this environment are the ones designed around the new reality rather than trying to recreate the old one.