How Privacy-Enhancing Technologies Are Revolutionizing Data Use in Digital Advertising
This article reviews the background, core techniques, and typical applications of privacy‑enhancing technologies—including secure multi‑party computation, privacy‑preserving machine learning, differential privacy, and trusted execution environments—highlighting their role in unlocking multi‑party data value while ensuring compliance and privacy protection.
Abstract
Data has become the fifth production factor in the digital era, driving AI, business decisions, and social governance. Releasing the value of multi‑party data fusion while ensuring security, compliance, and privacy is a key challenge, and privacy‑enhancing technologies (PETs) provide crucial support for the healthy development of the digital advertising industry.
1. Background Introduction
1.1 Privacy Dilemma in the Digital Age
In the digital era, personal trajectories are deeply recorded by smartphones, smart homes, and social media, making data a core resource. IDC predicts global data volume will exceed 175 ZB by 2025, with enterprises holding over 60% of it. While multi‑source data fusion can improve decision efficiency and drive innovation, it also raises severe misuse and leakage risks, as illustrated by incidents such as Verizon’s “super‑cookie” fine and the Cambridge Analytica scandal. Consequently, regulations like GDPR and China’s Personal Information Protection Law impose stricter compliance requirements.
Privacy‑enhancing technologies (PETs) are the key to unlocking data value without compromising privacy.
1.2 What Is Privacy Computing?
Privacy computing, also called privacy‑enhanced or privacy‑preserving computing, enables data analysis without exposing raw data. Its core principle is “data usable but invisible”: only the computation results are revealed, while raw inputs remain encrypted throughout the process.
The mainstream PETs can be divided into four categories:
Secure Multi‑Party Computation (MPC)
Privacy‑Preserving Machine Learning (PPML)
Differential Privacy (DP)
Trusted Execution Environment (TEE)
2. Privacy Computing Technologies
2.1 Secure Multi‑Party Computation
Definition: MPC allows multiple mutually distrustful parties to jointly compute a function without revealing their private inputs, using secure protocols. The abstract model consists of a function f, each party’s input xi, and each party’s output yi, with no party learning others’ inputs or outputs beyond what is inferable from its own data.
Development & Classification: Originating from Yao’s 1982 millionaires’ problem, MPC has evolved into two main protocol families: general‑purpose MPC (e.g., garbled circuits) and specialized MPC for specific problems (e.g., Private Set Intersection, Private Information Retrieval).
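To make the specialized‑protocol idea concrete, the sketch below illustrates the *goal* of PSI: two parties learn only the overlap of their user lists, never the full lists. Note this toy version compares salted hashes directly, which is not a secure PSI protocol (low‑entropy identifiers could be brute‑forced); real deployments use DH‑ or OT‑based PSI. All names and data here are hypothetical.

```python
import hashlib

def hashed_ids(ids, salt=b"shared-salt"):
    # Each party hashes its identifiers locally; only digests are exchanged.
    return {hashlib.sha256(salt + i.encode()).hexdigest() for i in ids}

party_a = ["alice@example.com", "bob@example.com", "carol@example.com"]
party_b = ["bob@example.com", "dave@example.com"]

# Intersect the digest sets: each side learns only which entries overlap.
common = hashed_ids(party_a) & hashed_ids(party_b)
print(len(common))  # 1 -- one shared user, raw lists never exchanged
```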
Characteristics: MPC’s biggest advantage is its “zero‑leakage” property—participants can collaborate without sharing raw data or relying on a trusted third party, offering strong theoretical security for scenarios like advertising and joint risk control. However, MPC suffers from high computational and communication overhead and requires complex protocol design and coordinated deployment.
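The “zero‑leakage” collaboration can be illustrated with additive secret sharing, one of the basic building blocks behind many MPC protocols. In this toy example (an illustration, not a hardened implementation; all input values are hypothetical), three parties learn the sum of their private inputs without any party seeing another’s value:

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n=3):
    # Split a secret into n additive shares that sum to it mod PRIME.
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Three parties each secret-share a private input with the others.
inputs = [120, 45, 300]
all_shares = [share(x) for x in inputs]

# Each party locally adds up the one share it holds from every input...
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
# ...and only these partial sums are combined, revealing just the total.
total = reconstruct(partial_sums)
print(total)  # 465 -- the sum, with no individual input exposed
```

Individual shares are uniformly random, so any single share (or any incomplete set of shares) carries no information about the underlying secret.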
2.2 Privacy‑Preserving Machine Learning
Definition: PPML refers to techniques that protect data privacy throughout the machine‑learning lifecycle, ensuring that model training and inference do not expose individual data.
Development & Classification: PPML can be categorized into three streams:
Federated Learning (FL): Distributed training where raw data stays local and only model updates are exchanged.
Cryptographic PPML: Uses MPC, homomorphic encryption, etc., to secure inference services (MLaaS).
Data‑Perturbation PPML: Injects random noise (e.g., differential privacy, k‑anonymity) into data or model outputs.
Federated Learning
Definition: FL enables multiple participants (devices, hospitals, enterprises) to collaboratively train a global model without sharing raw data, exchanging only gradients or weights.
Characteristics: FL suits scenarios with many participants and high training‑performance demands, but its accuracy degrades when participants’ data are not independent and identically distributed (non‑IID), and the exchanged model updates can still leak sensitive information, requiring additional protection such as encryption.
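The exchange‑only‑updates idea can be sketched with the FedAvg aggregation rule: the server averages client parameter vectors, weighted by each client’s local dataset size. A minimal plain‑Python illustration (the client weights and sizes below are made up):

```python
def fedavg(client_weights, client_sizes):
    # Server-side FedAvg: average parameter vectors weighted by local data size.
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]

# Three clients train locally and upload only their parameter vectors.
w1, w2, w3 = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
global_w = fedavg([w1, w2, w3], client_sizes=[100, 100, 200])
print(global_w)  # [3.5, 4.5] -- pulled toward the larger client
```

Only `w1`–`w3` ever leave the clients; raw training data stays local, which is exactly the FL trust model described above.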
Cryptographic PPML
Definition: Primarily used in Machine Learning as a Service (MLaaS), it combines MPC or homomorphic encryption to protect the inference process.
Characteristics: Offers the highest security but incurs significant computational and communication costs, suitable for high‑sensitivity scenarios such as protecting enterprise model assets.
Data‑Perturbation PPML
Definition: Adds random noise to inputs, outputs, or gradients, using techniques like differential privacy or k‑anonymity to obscure individual records.
Characteristics: Simple to implement with low overhead, but the added noise may degrade model accuracy, requiring a trade‑off between privacy and utility.
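A common concrete instance of gradient perturbation is the clip‑then‑noise step used in DP‑SGD‑style training: first bound any single example’s influence by clipping the gradient’s L2 norm, then add Gaussian noise scaled to that bound. A minimal sketch (the parameter values are illustrative, and the noise level shown is not calibrated to any specific privacy budget):

```python
import math
import random

def privatize_gradient(grad, clip_norm=1.0, noise_std=0.5):
    # 1) Clip the gradient's L2 norm so no single example has unbounded influence.
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    # 2) Add Gaussian noise calibrated to the clipping bound.
    return [g + random.gauss(0.0, noise_std * clip_norm) for g in clipped]

noisy = privatize_gradient([3.0, 4.0])  # norm 5.0 is clipped to 1.0, then noised
```

The clipping bound is what makes the noise meaningful: without it, a single outlier gradient could dominate the update no matter how much noise is added.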
2.3 Differential Privacy
Definition: Differential privacy adds calibrated noise to query results, guaranteeing that the presence or absence of any single individual has only a limited impact on the output, thus preventing re‑identification.
Development & Classification: Introduced by Dwork et al. in 2006, it evolved from centralized DP to local DP, with noise mechanisms such as the Laplace and Gaussian mechanisms and relaxed variants such as Rényi and zero‑concentrated DP, supporting a wide range of statistical and machine‑learning applications.
Characteristics: Provides strong mathematical privacy guarantees and broad applicability, with relatively low additional computation and communication overhead. However, the injected noise can reduce data utility, especially for small or high‑dimensional datasets, and DP protects only the released output, not the storage or computation phases.
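For a counting query the classic Laplace mechanism is easy to state: adding or removing one record changes a count by at most 1 (sensitivity 1), so adding Laplace noise with scale 1/ε yields ε‑differential privacy. A minimal sketch (the query and the numbers are made up):

```python
import random

def laplace_count(true_count, epsilon, sensitivity=1.0):
    # A count changes by at most `sensitivity` when one record is added or
    # removed, so Laplace noise with scale sensitivity/epsilon gives
    # epsilon-differential privacy for this query.
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is the difference of two exponentials.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

release = laplace_count(1000, epsilon=0.5)  # noisy count, safe to publish
```

Smaller ε means a larger noise scale and stronger privacy; the analyst sees only `release`, never the exact count.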
2.4 Trusted Execution Environment
Definition: TEE creates a hardware‑isolated enclave that guarantees code integrity and the confidentiality of data and runtime state, preventing external software from accessing the enclave’s memory.
Development & Classification: The TEE concept took shape in the mid‑2000s around mobile dual‑system security; ARM TrustZone, introduced in the early 2000s, became the dominant mobile TEE solution, while Intel SGX (2015) popularized TEE on x86 platforms. Recent CPUs from Huawei (Kunpeng), Zhaoxin, Phytium, and Nvidia also integrate TEE capabilities.
Characteristics: TEE offers hardware‑rooted security for sensitive workloads such as mobile payments and media protection, with overhead far below that of cryptographic approaches (roughly 3‑4× plaintext performance at worst). Challenges include hardware dependence, limited cross‑platform compatibility, a lack of unified standards, and susceptibility to side‑channel attacks.
3. Application Scenarios
Financial Risk Control : MPC + FL enable collaborative credit scoring across banks, enhancing anti‑fraud capabilities.
Healthcare : MPC + TEE support joint drug research while preserving patient privacy.
Digital Advertising : FL + DP allow cross‑platform user profiling, improving targeting precision without exposing individual data.
Government Governance : TEE + DP facilitate inter‑departmental data collaboration (e.g., social security, taxation) while safeguarding citizen information.
4. Conclusion
Privacy‑enhancing technologies are evolving from isolated solutions to integrated ecosystems. The convergence of MPC, PPML, DP, and TEE, together with emerging standards and compliance frameworks, is reshaping data utilization rules, turning data value and privacy protection from opposing forces into a symbiotic relationship. As PETs mature, they are poised to become core infrastructure for the digital economy, driving sustainable growth in advertising, healthcare, finance, and beyond.
5. References
IDC. Global DataSphere: The Rapid Growth of the Global Data Sphere. 2020.
Cramer‑Flood, Ethan. Worldwide Ad Spending Forecast 2025. eMarketer, 2025.
CBS News. Verizon to pay $1.35M fine over “super‑cookie” tracking. 2016.
Cadwalladr, Carole; Graham‑Harrison, Emma. 50 million Facebook profiles harvested for Cambridge Analytica. The Guardian, 2018.
Yao, A. C. Protocols for secure computations. SFCS, 1982.
Konečný, J., et al. Federated learning: Strategies for improving communication efficiency. arXiv, 2016.
Dwork, Cynthia. Differential privacy. ICALP, 2006.
Fei, S., et al. Security vulnerabilities of SGX and countermeasures: A survey. ACM Computing Surveys, 2021.
Alimama Tech
Official Alimama tech channel, showcasing all of Alimama's technical innovations.
