Privacy vs. Personalization in Advertising: Technical Foundations and Emerging Solutions
The article examines how privacy regulations and platform changes, especially Apple's iOS 14 release, challenge personalized advertising. It focuses on the technical core of user identification, the risks of cross-domain data sharing, and a range of mitigation strategies such as fingerprint protection, third-party cookie blocking, and privacy-preserving attribution frameworks like PCM, SKAdNetwork, AEM, and FLoC.
Overview
Personalized advertising relies on extensive user data collection, while privacy protection aims to limit that data, creating a tension between the two goals. The article outlines the relationship between personalization and privacy, the fundamental problem of uniquely identifying users (User ID), and the technical challenges this poses.
Fundamental Issue: User ID
Both web and app environments need a stable User ID to track users across sessions. In the PC era, fingerprinting libraries like fingerprintjs combine many device attributes to generate low‑collision identifiers. In the mobile era, platforms provide built‑in identifiers (Android ID, IDFA, UDID) that are increasingly restricted.
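The fingerprinting idea above can be sketched as follows. This is an illustrative model, not fingerprintjs's actual algorithm: the attribute set and the FNV-1a hash are simplified stand-ins for the many signals and stronger hashes real libraries use.

```typescript
// Illustrative sketch of browser fingerprinting: combine several
// device/browser attributes into a single low-collision identifier
// that stays stable across sessions without any stored cookie.

interface DeviceAttributes {
  userAgent: string;
  screenResolution: string;
  timezone: string;
  language: string;
  canvasHash: string; // hash of rendered canvas pixels
}

// Simple FNV-1a hash; real libraries use stronger hashing (e.g. murmur3).
function fnv1a(input: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

function computeFingerprint(attrs: DeviceAttributes): string {
  // Concatenate attributes in a fixed order so the same device
  // always yields the same identifier across sessions.
  const material = [
    attrs.userAgent,
    attrs.screenResolution,
    attrs.timezone,
    attrs.language,
    attrs.canvasHash,
  ].join('||');
  return fnv1a(material);
}
```

The same device yields the same ID on every visit, which is exactly why the defenses in the next section target the inputs to this computation.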
Privacy Protection Strategies
Web
Make User ID harder to obtain by adding noise to fingerprinting techniques (e.g., Canvas noise).
Prohibit cross‑domain data sharing by blocking third‑party cookies; browsers such as Safari, Firefox, Chrome, Tor, and Brave have implemented or are rolling out stricter cookie policies.
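The Canvas-noise defense mentioned above can be sketched like this: perturb the low bits of the pixel data a script reads back, keyed by a per-session seed, so any hash of the canvas changes between sessions. This models what privacy-focused browsers do internally; the function names are illustrative, not a real browser API.

```typescript
// Sketch of the "canvas noise" defense: flip the least significant bit
// of pixel bytes using a session-seeded PRNG. The distortion is
// invisible to users but changes any hash of the pixel buffer, so a
// canvas fingerprint no longer survives across sessions or profiles.

function makeNoiser(seed: number) {
  return (pixels: Uint8ClampedArray): Uint8ClampedArray => {
    const out = new Uint8ClampedArray(pixels.length);
    let state = seed >>> 0;
    for (let i = 0; i < pixels.length; i++) {
      // xorshift PRNG keyed by the session seed
      state ^= state << 13; state >>>= 0;
      state ^= state >> 17;
      state ^= state << 5; state >>>= 0;
      // Flip only the least significant bit of each byte.
      out[i] = pixels[i] ^ (state & 1);
    }
    return out;
  };
}
```

Within one session the noise is deterministic, so a page that reads the canvas twice sees consistent values; only cross-session linkage is broken.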
App
Apple’s App Tracking Transparency (ATT) framework restricts access to IDFA, requiring user consent.
Android’s policies are currently less strict, but similar restrictions are anticipated.
Regulatory Landscape
Legislation such as the GDPR and CCPA mandates user consent, data usage transparency, and the right to opt‑out of tracking, influencing how advertisers must handle user data.
Technical Solutions for Attribution
Private Click Measurement (PCM)
PCM, introduced by Apple, adds two attributes to <a> tags (attributionsourceid and attributiondestination) to enable privacy-preserving click-through attribution without exposing a User ID. It limits source IDs to 8 bits, scopes attribution to eTLD+1 domains, aggregates events, and delays reporting by 24-48 hours.
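The PCM constraints can be made concrete from the advertiser's side. The helper below is a hypothetical sketch that only builds the anchor markup and enforces the 8-bit source-ID limit; the attribute names follow WebKit's PCM proposal, everything else is illustrative.

```typescript
// Sketch of Private Click Measurement from the advertiser's side:
// the click source ID must fit in 8 bits (0-255), and the attribution
// destination is a site (eTLD+1), not a full URL. PCM later reports
// only aggregated (sourceId, destination, triggerData) tuples, delayed
// 24-48 hours, so no per-user identifier ever leaves the browser.

function pcmAnchor(sourceId: number, destination: string, href: string): string {
  if (!Number.isInteger(sourceId) || sourceId < 0 || sourceId > 255) {
    throw new Error('PCM source ID must fit in 8 bits (0-255)');
  }
  return `<a href="${href}" attributionsourceid="${sourceId}" ` +
         `attributiondestination="${destination}">ad</a>`;
}
```

The 8-bit budget is the whole point: with at most 256 distinguishable campaigns per site, the source ID cannot double as a covert per-user identifier.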
SKAdNetwork (SKAN)
SKAN provides a privacy-first attribution model for app-to-app installs, using limited fields such as a campaign ID in the range 1-100 and a 6-bit conversion value (0-63), with delayed reporting and cryptographically signed postbacks.
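Advertisers commonly treat the 6-bit conversion value as a bitfield of post-install events. The event-to-bit mapping below is an application-level convention invented for illustration, not part of Apple's API.

```typescript
// Sketch of packing post-install events into SKAdNetwork's 6-bit
// conversion value (0-63). Each event claims one bit, so up to six
// binary signals fit in a single postback.

const EVENT_BITS: Record<string, number> = {
  install:   1 << 0,
  signup:    1 << 1,
  trial:     1 << 2,
  purchase:  1 << 3,
  level10:   1 << 4,
  subscribe: 1 << 5,
};

function packConversionValue(events: string[]): number {
  let value = 0;
  for (const e of events) {
    const bit = EVENT_BITS[e];
    if (bit === undefined) throw new Error(`unknown event: ${e}`);
    value |= bit;
  }
  return value; // always 0-63 by construction
}

function unpackConversionValue(value: number): string[] {
  return Object.entries(EVENT_BITS)
    .filter(([, bit]) => (value & bit) !== 0)
    .map(([name]) => name);
}
```

Six bits is a deliberately tight budget: it is enough to measure campaign quality in aggregate, but far too little to encode a per-user identifier.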
Aggregated Event Measurement (AEM)
Facebook’s AEM builds on PCM principles for App‑to‑Web flows, reducing trigger‑data bits to 3, extending delay to 72 hours, and keeping the entire attribution process within the app.
Federated Learning of Cohorts (FLoC)
Google’s FLoC groups users with similar browsing histories into cohorts, enabling interest‑based ad targeting without exposing individual identifiers. It operates locally in the browser, adds optional noise, and respects opt‑out via the Permissions-Policy: interest-cohort=() header.
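The cohort idea can be sketched with a SimHash over visited domains: users with overlapping histories tend to land in the same cohort, and only the cohort ID (not the history) is exposed. The hash and the 8-bit cohort width below are simplifications; Chrome's actual implementation differed, and FLoC was later retired.

```typescript
// Illustrative SimHash-style cohort assignment in the spirit of FLoC.
// Everything runs locally: the browsing history never leaves the
// device, only the small cohort ID does.

function domainHash(domain: string): number {
  // FNV-1a over the domain string, as an unsigned 32-bit value.
  let h = 2166136261;
  for (let i = 0; i < domain.length; i++) {
    h = Math.imul(h ^ domain.charCodeAt(i), 16777619) >>> 0;
  }
  return h;
}

// 8-bit SimHash: for each bit position, sum +1/-1 votes across all
// domain hashes; the sign of each sum gives one bit of the cohort ID.
function cohortId(domains: string[], bits = 8): number {
  const votes = new Array(bits).fill(0);
  for (const d of domains) {
    const h = domainHash(d);
    for (let b = 0; b < bits; b++) {
      votes[b] += (h >> b) & 1 ? 1 : -1;
    }
  }
  let id = 0;
  for (let b = 0; b < bits; b++) {
    if (votes[b] > 0) id |= 1 << b;
  }
  return id;
}
```

Because the ID is a per-bit majority vote, it is insensitive to history order and to small differences between users, which is what makes cohorts both stable and coarse enough to resist re-identification.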
First‑Party Landing Pages
Another mitigation approach is to keep all user interactions on first‑party domains, eliminating cross‑domain data sharing and simplifying compliance.
Conclusion
Balancing privacy and personalization requires a combination of technical safeguards (fingerprint hardening, cookie restrictions, privacy‑preserving attribution) and compliance with evolving regulations, while industry players continue to develop and refine solutions.
ByteFE
Cutting‑edge tech, article sharing, and practical insights from the ByteDance frontend team.