
Privacy Risks and Differentially Private Defense for Federated Knowledge Graph Representation Learning

This paper investigates the privacy leakage risks of federated knowledge graph representation learning, designs three membership inference attacks to quantify the threats, and proposes DP‑Flames, a differential‑privacy‑based defense that leverages gradient sparsity to achieve a favorable privacy‑utility trade‑off.


Knowledge graphs are widely used in AI applications, and federated learning lets multiple institutions collaboratively train knowledge graph embedding models without sharing their raw triples. The collaboration, however, introduces privacy risks: membership inference attacks can reveal which triples a participant used for training.

The authors, from Ant Group’s Security Lab and Zhejiang University, study these risks in the context of federated knowledge graph representation learning and formulate two research questions: (1) what are the privacy risks, and (2) how can they be mitigated.

They design three attacks from the attacker's perspective: a server‑initiated inference attack (SIA), which assumes a semi‑honest server holding an auxiliary dataset; a client‑initiated passive attack (CIP), in which malicious clients infer triples belonging to other clients; and a client‑initiated active attack (CIA), in which a malicious client deviates from the federated protocol to boost its inference success.
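The passive attacks share a common core: score candidate triples against the learned embeddings and flag high‑scoring ones as likely training members. A minimal sketch of this idea, assuming a TransE‑style model and hypothetical `entity_emb`/`rel_emb` lookup tables (the paper's attacks are more elaborate):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: smaller ||h + r - t|| means a more plausible
    triple, so we negate the distance to get a higher-is-better score."""
    return -np.linalg.norm(h + r - t)

def infer_membership(candidates, entity_emb, rel_emb, threshold):
    """Score-threshold membership inference (simplified): flag candidate
    triples whose plausibility exceeds a threshold as likely members of
    the victim's training set."""
    guesses = []
    for (h, r, t) in candidates:
        s = transe_score(entity_emb[h], rel_emb[r], entity_emb[t])
        guesses.append(s > threshold)
    return guesses
```

The intuition is that a model memorizes its training triples, so they score systematically higher than held‑out triples; the threshold would be calibrated on the attacker's auxiliary data.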

Experiments on public datasets FB15k‑237 and NELL‑995 show that all three attacks achieve non‑trivial F1‑scores, confirming that current federated knowledge graph training protocols leak knowledge privacy.
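Attack success is reported as an F1‑score over candidate triples, where a "positive" means the attacker claims the triple is in the victim's training set. A small helper for that metric (standard precision/recall arithmetic, not code from the paper):

```python
def attack_f1(predicted, actual):
    """F1-score of a membership inference attack. `predicted` and
    `actual` are boolean lists over the same candidate triples."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

An F1 meaningfully above the random‑guessing baseline indicates the protocol leaks membership information.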

To defend against these attacks, the paper proposes DP‑Flames, a differentially private training framework that exploits the sparsity of gradients in knowledge graph models. It uses a two‑stage top‑K private selection algorithm to add Gaussian noise only to non‑zero gradient coordinates, and adapts the privacy budget by emphasizing early training rounds where attacks are most effective.
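The key observation is that in each round a knowledge graph model only updates the embeddings of the few entities and relations touched by the sampled triples, so most gradient coordinates are zero and noising all of them (as naïve DPSGD does) wastes privacy budget. A simplified sketch of one sparsity‑aware noising step; note this version picks the top‑K coordinates non‑privately, whereas DP‑Flames uses a two‑stage *private* top‑K selection:

```python
import numpy as np

def sparse_dp_step(grad, k, clip_norm, sigma, rng):
    """Simplified sparsity-aware DP step (NOT the paper's exact algorithm):
    clip the gradient to bound sensitivity, keep only the k
    largest-magnitude coordinates, and add Gaussian noise to those alone."""
    g = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g = g * (clip_norm / norm)            # per-example clipping
    idx = np.argsort(np.abs(g))[-k:]          # top-k selection (non-private here)
    noisy = np.zeros_like(g)
    noisy[idx] = g[idx] + rng.normal(0.0, sigma * clip_norm, size=k)
    return noisy
```

Because noise is injected into k coordinates instead of the full (entity‑count × dimension) gradient, the same privacy guarantee costs far less utility on sparse models.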

Evaluation demonstrates that DP‑Flames drives attack success rates down to the level of random guessing while preserving model performance, even under a modest privacy budget (ε ≈ 10), and achieves a better privacy‑utility trade‑off than naïve DPSGD.

The authors conclude that DP‑Flames provides an effective first step toward privacy‑preserving federated knowledge graph learning and outline future directions such as supporting more embedding models, further improving the trade‑off, and handling heterogeneous client knowledge bases.

Tags: privacy, Knowledge Graph, Federated Learning, differential privacy, DP-Flames, membership inference attack
Written by AntTech

Technology is the core driver of Ant's future.
