Artificial Intelligence · 8 min read

DCCL: A Contrastive Learning Framework for Causal Representation Decoupling in Recommendation Systems

The paper introduces DCCL, a model‑agnostic contrastive learning framework that decouples user interest and conformity representations to address popularity bias and out‑of‑distribution challenges in recommendation systems, demonstrating significant offline and online performance gains on real‑world datasets.

Kuaishou Tech

Recommendation systems traditionally learn from observed user‑item interactions on the assumption that these interactions reflect user interest. In practice, interactions also stem from conformity, where users chase popular items. Ignoring this factor entangles conformity with genuine interest, leading to suboptimal recommendations and out‑of‑distribution (OOD) generalization issues.

The authors propose DCCL, a causal representation decoupling framework based on contrastive learning. DCCL is model‑agnostic and can be deployed in any online system. It constructs a causal graph that separates user representations into interest and conformity components, and item representations into content and popularity components, allowing both factors to jointly influence interactions.
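As a rough sketch of this decoupling (the table sizes, initialization, and scoring rule below are illustrative assumptions, not the paper's exact architecture), each user can hold separate interest and conformity embeddings, each item separate content and popularity embeddings, and the interaction score can sum the two matched dot products:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 100, 500, 16

# Decoupled embedding tables (hypothetical): users carry an interest and a
# conformity vector, items a content and a popularity vector.
user_interest   = rng.normal(size=(n_users, d))
user_conformity = rng.normal(size=(n_users, d))
item_content    = rng.normal(size=(n_items, d))
item_popularity = rng.normal(size=(n_items, d))

def score(u, i):
    # Both factors jointly drive the interaction: interest matches content,
    # conformity matches popularity.
    return (user_interest[u] @ item_content[i]
            + user_conformity[u] @ item_popularity[i])
```

Because the two score components are additive, the model can in principle attribute a click to interest, to conformity, or to a mix of both.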

Two contrastive learning sub‑tasks are introduced: IPCL (Interest‑wise Contrastive Learning) and CPCL (Conformity‑wise Contrastive Learning). Both build on the BPR loss, treating items the user has interacted with as positive samples and drawing negatives from the same batch. IPCL emphasizes low‑popularity items by reweighting pairs with normalized popularity, whereas CPCL filters out candidate items more popular than the target item, isolating the conformity signal.

The overall loss combines the main BPR loss with the IPCL and CPCL losses.
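The loss structure described above can be sketched as follows; the exact popularity weighting in IPCL, the filtering rule in CPCL, and the trade‑off weights `alpha` and `beta` are hypothetical stand‑ins, not the paper's formulas:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr(pos, neg):
    # Standard BPR pairwise loss: positives should outscore negatives.
    return -np.log(sigmoid(pos - neg) + 1e-12)

def ipcl_loss(pos_scores, neg_scores, pos_pop):
    # IPCL sketch: up-weight pairs whose positive item has low normalized
    # popularity (this weighting scheme is an assumption).
    weights = 1.0 - pos_pop  # pos_pop normalized to [0, 1]
    return np.mean(weights * bpr(pos_scores, neg_scores))

def cpcl_loss(pos_scores, neg_scores, pos_pop, neg_pop):
    # CPCL sketch: drop in-batch negatives that are more popular than the
    # target item, so the conformity signal stays clean.
    keep = neg_pop <= pos_pop
    if not keep.any():
        return 0.0
    return np.mean(bpr(pos_scores[keep], neg_scores[keep]))

def total_loss(pos_s, neg_s, pos_pop, neg_pop, alpha=0.1, beta=0.1):
    # Overall objective: main BPR plus the two weighted sub-task losses
    # (alpha and beta are hypothetical trade-off hyperparameters).
    return (np.mean(bpr(pos_s, neg_s))
            + alpha * ipcl_loss(pos_s, neg_s, pos_pop)
            + beta * cpcl_loss(pos_s, neg_s, pos_pop, neg_pop))
```

Keeping the sub‑tasks as additive terms over the same scores is what makes the framework model‑agnostic: any backbone that produces pairwise scores can plug in.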

Offline experiments on Yelp and a short‑video dataset compare DCCL against baselines such as CausE, IPS, DICE, PD, and MACR, using MF and LightGCN as backbones. DCCL consistently outperforms all baselines, achieving up to a 33.24% improvement in HR@20 over MF and 22.91% over LightGCN, and more than 10% over the strongest baseline, MACR.

The OOD evaluation defines popular items as the top 20% by popularity and constructs test sets with varying popularity distributions. DCCL maintains superior HR@20 across all splits, with gains that grow as the test distribution deviates further from the training popularity distribution, indicating strong robustness under distribution shift.
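A minimal way to build such popularity‑shifted test splits might look like this (the resampling scheme and function names are assumptions for illustration; the paper's exact protocol may differ):

```python
import numpy as np

def split_by_popularity(item_counts, top_frac=0.2):
    # Label the top 20% of items by interaction count as "popular".
    order = np.argsort(item_counts)[::-1]
    n_pop = max(1, int(len(item_counts) * top_frac))
    return set(order[:n_pop].tolist())

def make_ood_testset(interactions, popular, popular_share, rng):
    # Resample a test set whose fraction of popular-item interactions is
    # `popular_share`; sweeping this share probes robustness as the test
    # distribution drifts away from training.
    pop = [x for x in interactions if x[1] in popular]
    unpop = [x for x in interactions if x[1] not in popular]
    n = min(len(pop), len(unpop))          # keep the requested mix feasible
    k_pop = int(n * popular_share)
    idx_p = rng.choice(len(pop), size=k_pop, replace=False)
    idx_u = rng.choice(len(unpop), size=n - k_pop, replace=False)
    return [pop[i] for i in idx_p] + [unpop[i] for i in idx_u]
```

Evaluating HR@20 on a sweep of `popular_share` values then traces how each model degrades as the popularity mix shifts.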

Online A/B testing on Kuaishou’s short‑video recommendation platform demonstrates significant lifts in effective play rate and like rate, with consistent improvements across items of different popularity levels.

In conclusion, DCCL effectively decouples interest and conformity representations, enhancing recommendation accuracy and robustness. Future work will explore multi‑intent decoupling using causal discovery, hypergraph neural networks, and clustering to further improve user experience.

contrastive learning · recommendation systems · popularity bias · causal inference · OOD robustness · user interest
Written by

Kuaishou Tech

Official Kuaishou tech account, providing real-time updates on the latest Kuaishou technology practices.
