The adAnalytics endpoint is where extraction logic lives. It has several structural constraints that require specific engineering decisions in request design, error handling, and incremental load strategy.
No Pagination
The most unusual characteristic of the adAnalytics endpoint: it has no pagination. Every other significant API endpoint in the ad platform world — Google’s SearchStream, Meta’s Insights API — uses pagination to handle large result sets. LinkedIn’s adAnalytics returns all results in a single response, capped at 15,000 elements.
If your query would return more than 15,000 rows, the API truncates the response without warning you that truncation occurred. You don’t get an error. You just get 15,000 rows instead of 25,000, and unless you’re counting rows against what you expect, you won’t know.
This means you have to manage the response size yourself by controlling the scope of each request. The standard pattern is to slice by date range — request one week at a time rather than one month, or one day at a time for accounts with many campaigns. The right slice size depends on your account’s scale: the number of campaigns multiplied by the number of days in your range should stay comfortably under 15,000.
For large accounts running hundreds of active campaigns with demographic pivot breakdowns (which multiply row counts), you may need to request a single day at a time. Build the date-slicing logic into your extraction code from the start rather than discovering the truncation problem when your pipeline has been running for six months and you notice campaign counts don’t match the LinkedIn UI.
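A minimal sketch of that date-slicing logic, assuming you can estimate rows-per-day for your own account (the `active_campaigns` and `pivot_multiplier` inputs, the 50% safety margin, and the function name are all illustrative, not LinkedIn-prescribed):

```python
from datetime import date, timedelta

ROW_CAP = 15_000
SAFETY_MARGIN = 0.5  # target at most half the cap per request

def date_slices(start: date, end: date, active_campaigns: int,
                pivot_multiplier: int = 1):
    """Yield (slice_start, slice_end) ranges sized so that
    campaigns * days * pivot values stays under the response cap."""
    rows_per_day = active_campaigns * pivot_multiplier
    max_days = max(1, int(ROW_CAP * SAFETY_MARGIN / rows_per_day))
    current = start
    while current <= end:
        slice_end = min(current + timedelta(days=max_days - 1), end)
        yield current, slice_end
        current = slice_end + timedelta(days=1)

# Example: 400 campaigns with a demographic pivot averaging 20 values each
# exceeds the margin in a single day, so the slicer falls back to one-day
# slices.
for s, e in date_slices(date(2024, 1, 1), date(2024, 1, 5), 400, 20):
    print(s, e)
```

The point of the safety margin is that campaign counts drift upward over time; a slicer tuned to exactly 15,000 rows will eventually truncate silently.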
The 20-Metric Limit
Each adAnalytics request can include a maximum of 20 metrics. LinkedIn’s API offers more than 20 metrics — core performance metrics, social action metrics, viral metrics, video engagement metrics, and conversion metrics. If you want them all, you need multiple requests per time period.
The join key is the combination of the creative or campaign identifier and the date. For a single campaign on a single day, you’d make two requests (metrics 1-20 and metrics 21+) and join on (campaign_id, date_day) in your transformation layer.
In practice, this means deciding which metrics you actually need rather than collecting everything. Most teams don’t need viral share-of-engagement metrics alongside standard click and spend data. Be deliberate about which metrics go into each request, document the decision, and build your joins explicitly.
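The chunk-and-join pattern can be sketched as follows. The `campaign_id` and `date_day` field names stand in for whatever your extraction layer emits; the chunker and merge helper are illustrative, not part of LinkedIn's API:

```python
MAX_METRICS_PER_REQUEST = 20

def chunk_metrics(metrics):
    """Split a metric list into request-sized groups of at most 20."""
    return [metrics[i:i + MAX_METRICS_PER_REQUEST]
            for i in range(0, len(metrics), MAX_METRICS_PER_REQUEST)]

def join_partials(partials):
    """Merge rows from multiple metric-chunk responses, keyed on
    (campaign_id, date_day), into single wide rows."""
    merged = {}
    for rows in partials:
        for row in rows:
            key = (row["campaign_id"], row["date_day"])
            merged.setdefault(key, {}).update(row)
    return list(merged.values())
```

Doing the join explicitly like this (rather than relying on row order across responses) keeps the pipeline correct even when the two requests return rows in different orders or one request returns fewer rows than the other.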
Query Tunneling for Long URLs
LinkedIn’s API uses query parameters to specify the campaigns, creatives, and metrics you want to retrieve. For accounts with many campaigns, the list of campaign IDs in the query parameters can exceed URL length limits — typically around 2,000-4,000 characters depending on the HTTP layer. When you hit the limit, you’ll get a 414 URI Too Long error.
The workaround is “query tunneling”: instead of a GET request with parameters in the URL, you send a POST request with the parameters in the request body. LinkedIn’s API supports this pattern — you send the same parameters, just in the body of a POST rather than appended to the URL.
This is non-obvious the first time you hit it. The error code is clear enough (414), but the solution isn’t documented as prominently as it should be. If you’re building a custom extractor, add query tunneling support from the start for any endpoint that accepts long parameter lists. If you’re using a managed tool, verify it handles query tunneling correctly — some older implementations only used GET requests.
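A sketch of the GET-versus-tunnel decision. The `X-HTTP-Method-Override: GET` header with a form-encoded body follows LinkedIn's documented tunneling pattern, but verify the exact header and content type against the current docs; the 4,000-character threshold is an assumption you may need to tune:

```python
from urllib.parse import urlencode

URL_LENGTH_LIMIT = 4000
BASE = "https://api.linkedin.com/rest/adAnalytics"

def build_request(params: dict):
    """Return a request spec: plain GET when the URL fits,
    otherwise a tunneled POST with the query string in the body."""
    query = urlencode(params)
    if len(BASE) + 1 + len(query) <= URL_LENGTH_LIMIT:
        return {"method": "GET", "url": f"{BASE}?{query}",
                "body": None, "headers": {}}
    return {
        "method": "POST",
        "url": BASE,
        "body": query,  # same parameters, moved into the body
        "headers": {
            "X-HTTP-Method-Override": "GET",
            "Content-Type": "application/x-www-form-urlencoded",
        },
    }
```

Branching on length like this means small accounts keep simple GET requests while large campaign lists tunnel automatically, instead of failing with a 414 at some unpredictable account size.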
Cursor Pagination for Entity Endpoints
The analytics endpoint has no pagination, but the entity endpoints (accounts, campaign groups, campaigns, creatives) use cursor-based pagination. In January 2024, LinkedIn migrated these entity endpoints from index-based pagination to cursor-based pagination.
Index-based pagination lets you request “page 3 of results” directly by specifying an offset. Cursor-based pagination gives you an opaque cursor token with each response that you pass to get the next page — there’s no way to jump to a specific page. This is actually a more reliable approach (less prone to missing records when data changes mid-pagination), but it’s a breaking change for any custom pipeline code written before January 2024.
If you’re working with code written before that migration, the entity endpoints need to be updated. The analytics endpoint wasn’t affected — it never had pagination to migrate.
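The post-migration loop looks roughly like this. `fetch_page` stands in for your HTTP client, and the `pageToken` / `metadata.nextPageToken` field names follow LinkedIn's cursor-based pagination but should be checked against the current entity-endpoint docs:

```python
def iterate_entities(fetch_page):
    """Yield every element from a cursor-paginated entity endpoint,
    following opaque tokens until the API stops returning one."""
    token = None
    while True:
        response = fetch_page(page_token=token)
        yield from response.get("elements", [])
        token = response.get("metadata", {}).get("nextPageToken")
        if not token:
            break
```

Note there is no page arithmetic anywhere: the absence of a token is the only termination signal, which is exactly why pre-2024 offset-based code cannot be patched in place and has to be rewritten around the cursor.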
Rate Limits (That LinkedIn Won’t Tell You About)
LinkedIn’s rate limits have two types: application-level (total calls across all users of your app) and member-level (calls tied to a specific OAuth token). Both reset at midnight UTC.
LinkedIn doesn’t publish the specific limit numbers. You learn your limits by hitting them, or by watching the Developer Portal’s Analytics tab, which shows your quota usage. There’s no real-time programmatic way to check how close you are to a limit before you hit it.
Email alerts arrive at 75% of quota — with a 1 to 2 hour delay. By the time you get the email, you may have already hit the wall. Build exponential backoff with jitter into your extraction code, and treat 429 responses as expected rather than exceptional.
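A minimal backoff-with-jitter sketch. `call` is any function that raises on a 429; the retry count, base delay, and the `RateLimited` exception name are illustrative defaults, not LinkedIn-specified values:

```python
import random
import time

class RateLimited(Exception):
    """Raised by your HTTP layer on a 429 response."""

def with_backoff(call, max_retries=6, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff and full jitter on 429s."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except RateLimited:
            if attempt == max_retries:
                raise
            # full jitter: sleep a random duration up to the exponential cap
            sleep(random.uniform(0, base_delay * 2 ** attempt))
```

Full jitter (a uniform random sleep up to the cap, rather than the cap itself) matters here because the limits reset at midnight UTC: fixed delays tend to synchronize retries across concurrent extractions and re-trigger the limit together.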
For high-volume accounts running multiple concurrent extractions, the unpublished rate limits make capacity planning difficult. The practical approach: start conservative, monitor the Developer Portal’s quota dashboard regularly for the first few weeks, and back off if you see quota consumption approaching limits.
Monthly API Versioning
LinkedIn uses monthly API versioning with header-based specification. You include a Linkedin-Version: YYYYMM header with each request to specify which API version you want. Each version is supported for at least one year from its release.
The base URL is https://api.linkedin.com/rest/. The legacy /v2/ endpoint was fully deprecated in June 2023 — any documentation or code referencing the v2 path needs updating.
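The versioned request shape can be sketched as a headers helper. The token and version value are placeholders; the `X-Restli-Protocol-Version` header is required by LinkedIn's REST API, but confirm the full header set against the current docs:

```python
def linkedin_headers(access_token: str, version: str = "202401") -> dict:
    """Headers for a versioned request against api.linkedin.com/rest/."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Linkedin-Version": version,           # YYYYMM, opt-in versioning
        "X-Restli-Protocol-Version": "2.0.0",  # required by the REST API
    }
```

Pinning the version string in one place like this is the practical payoff of header-based versioning: upgrading is a deliberate one-line change you can test, not a behavior shift the API forces on you.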
The practical implication of monthly versioning: LinkedIn can ship breaking changes every month, and you need to track the changelog to know when something you’re using changes. In practice, LinkedIn doesn’t ship breaking changes every month, but the version header is your protection — you opt into new behavior by updating your version header rather than having it forced on you.
One notable breaking change to track: Sponsored Messaging metrics changed in July 2024. Impressions and clicks are no longer reported for Sponsored Messaging campaigns. They’re replaced by sends and opens. If you have Sponsored Messaging campaigns and your extraction code requests the old metrics, you’ll now get zeros for impressions and clicks. Update your metric configuration and documentation to use the new fields.
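One way to handle the change is a remap step applied only to Sponsored Messaging campaigns. The exact API metric names here (`sends`, `opens`, `costInLocalCurrency`) are assumptions; confirm them against the adAnalytics field reference:

```python
# Metric substitutions for Sponsored Messaging after the July 2024 change.
MESSAGING_METRIC_REMAP = {"impressions": "sends", "clicks": "opens"}

def metrics_for_campaign(requested, campaign_type):
    """Swap deprecated messaging metrics for their replacements;
    leave every other campaign type untouched."""
    if campaign_type != "SPONSORED_MESSAGING":
        return list(requested)
    return [MESSAGING_METRIC_REMAP.get(m, m) for m in requested]
```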
The Demographic Data Complications
Professional demographic pivots (MEMBER_COMPANY, MEMBER_JOB_TITLE, MEMBER_SENIORITY, etc.) are available through adAnalytics but come with additional constraints beyond the standard ones:
- Values are approximate and privacy-protected — LinkedIn applies noise to small numbers
- Only the top 100 values per creative per day are returned
- A minimum threshold of 3 events is required before a value appears at all
- Data retention for demographic breakdowns is limited to 2 years (versus 10 years for standard performance data)
Daily granularity is only available for 6 months. After that, LinkedIn auto-rounds to monthly aggregation. For pipeline design: if you want daily demographic data, you must capture it within the 6-month window. Historical backfills beyond that will give you monthly-grain data at best.
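A small guard for the grain decision, assuming a trailing window of roughly 183 days as the 6-month cutoff (an approximation; the function name and `DAILY`/`MONTHLY` labels are illustrative):

```python
from datetime import date, timedelta

DAILY_WINDOW_DAYS = 183  # ~6 months; approximation of LinkedIn's cutoff

def demographic_grain(range_start: date, today: date) -> str:
    """Return the granularity to request for a demographic extraction:
    daily inside the trailing window, monthly beyond it."""
    if today - range_start <= timedelta(days=DAILY_WINDOW_DAYS):
        return "DAILY"
    return "MONTHLY"
```

Wiring a check like this into the extraction scheduler makes the 6-month constraint explicit in code, instead of discovering it when a backfill silently comes back at monthly grain.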
These constraints mean demographic data requires a different pipeline strategy than standard performance metrics. Keep demographic extractions separate from performance extractions, with explicit documentation of what the values represent and why they might not sum exactly to totals. See LinkedIn Ads B2B Data Value for how to interpret and use this data productively.