
Entire Space Delayed Feedback with Cross‑Task Knowledge Distillation (ESDC) for Multi‑Task E‑commerce Recommendation

This article presents Xiaomi's e-commerce recommendation research, which addresses four key challenges (sample selection bias, data sparsity, delayed feedback, and knowledge inconsistency) with the Entire Space Delayed Feedback with Cross-Task Knowledge Distillation (ESDC) model. ESDC combines causal inference, cross-task distillation, twin networks, and uncertainty weighting to improve CVR prediction, achieving a 15% GMV lift over the baseline.

DataFunTalk

Background: In e-commerce recommendation, the ultimate goal is transaction volume (GMV), which a user reaches through the funnel exposure → click → purchase. Modeling the conversion rate (CVR) faces four major issues: sample selection bias, data sparsity, delayed feedback, and knowledge inconsistency between the CTR and CVR tasks.

To tackle these, the authors propose the Entire Space Delayed Feedback with Cross‑Task Knowledge Distillation (ESDC) model. The architecture consists of a teacher network (based on PLE and enhanced with ESCM² and ESDF ideas) and a student network, linked by a distillation weight‑adjustment module.

Teacher network: The teacher defines random variables X, Y, Z, C, D, E and partitions samples into a click-space set and an entire-space set. It contains four modules: CTR (a standard MLP), CVR (which adds post-click behavior features), Delay (which discretizes the delayed conversion time), and IPS weighting to debias the CTR task's influence on CVR. Its loss combines a CTR loss, a CTCVR loss (with ESDF), and an IPS loss.
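To make the IPS idea concrete, here is a minimal numpy sketch of an inverse-propensity-score weighted CVR loss over the entire exposure space, in the spirit of ESCM². The article does not give the exact formula, so the function name, the use of predicted CTR as the propensity, and the clipping constants are all illustrative assumptions.

```python
import numpy as np

def ips_weighted_cvr_loss(click, conversion, ctr_pred, cvr_pred, eps=1e-6):
    """Entire-space CVR loss sketch: each clicked impression is re-weighted
    by 1 / p(click), so the click-space loss approximates the entire-space
    risk. Non-clicked impressions get zero weight. Inputs are per-impression
    arrays; the formulation is an assumption, not the paper's exact loss."""
    p = np.clip(ctr_pred, eps, 1.0)          # propensity = predicted CTR
    q = np.clip(cvr_pred, eps, 1.0 - eps)
    # binary cross-entropy of the CVR prediction against the conversion label
    bce = -(conversion * np.log(q) + (1.0 - conversion) * np.log(1.0 - q))
    return float(np.mean((click / p) * bce))  # IPS weight: click / propensity
```

Because the weight is `click / p`, samples that were exposed but not clicked contribute nothing, while rarely clicked (low-propensity) samples are up-weighted to correct the selection bias.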

Distillation weight adjustment: Uses uncertainty weighting based on the KL divergence between twin (dropout-based) CVR networks, plus a common-frequency loss, to down-weight noisy soft labels. The final distillation loss integrates these weights.
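The intuition is that when the two dropout twins disagree, the teacher's soft label is uncertain and should count for less. A minimal sketch of such a per-sample weight follows; the article does not specify the mapping from divergence to weight, so the symmetric KL and the `exp(-KL)` transform are plausible assumptions, not the paper's formula.

```python
import numpy as np

def distill_weight(p_a, p_b, eps=1e-6):
    """Per-sample distillation weights from the disagreement of two dropout
    'twin' CVR heads (Bernoulli outputs p_a, p_b). High symmetric KL means
    a noisy soft label, hence a low weight. Returns values in (0, 1]."""
    p_a = np.clip(p_a, eps, 1.0 - eps)
    p_b = np.clip(p_b, eps, 1.0 - eps)
    # KL divergence between two Bernoulli distributions, in both directions
    kl_ab = p_a * np.log(p_a / p_b) + (1 - p_a) * np.log((1 - p_a) / (1 - p_b))
    kl_ba = p_b * np.log(p_b / p_a) + (1 - p_b) * np.log((1 - p_b) / (1 - p_a))
    return np.exp(-(kl_ab + kl_ba) / 2.0)  # 1.0 when the twins agree exactly
```

When the twins produce identical predictions the weight is exactly 1, and it decays monotonically toward 0 as their predictions diverge.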

Student network: Mirrors the teacher structure but removes unavailable online features and incorporates purchase‑sample weighting in the CTR module. Its loss combines the distilled CVR loss and standard CTR loss.
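Putting the pieces together, a sketch of the student objective might look as follows: a soft-label cross-entropy against the teacher's CVR predictions, scaled by the per-sample distillation weights, plus a standard CTR cross-entropy. The balance parameter `alpha` and the exact combination are assumptions for illustration, not the paper's published loss.

```python
import numpy as np

def student_loss(click, ctr_pred, teacher_cvr, student_cvr, w, alpha=1.0, eps=1e-6):
    """Student objective sketch: weighted CVR distillation + CTR loss.
    `w` holds per-sample distillation weights (e.g. from the twin-network
    uncertainty module); `alpha` trades off the two terms. Both the weights'
    use and `alpha` are illustrative assumptions."""
    t = np.clip(teacher_cvr, eps, 1.0 - eps)
    s = np.clip(student_cvr, eps, 1.0 - eps)
    # cross-entropy of the student against the teacher's soft CVR labels,
    # down-weighted per sample by w to suppress noisy soft labels
    distill = -np.mean(w * (t * np.log(s) + (1 - t) * np.log(1 - s)))
    p = np.clip(ctr_pred, eps, 1.0 - eps)
    # standard binary cross-entropy on the observed click label
    ctr = -np.mean(click * np.log(p) + (1 - click) * np.log(1 - p))
    return float(ctr + alpha * distill)
```

Since soft-label cross-entropy is minimized when the student matches the teacher, a student that reproduces the teacher's CVR predictions scores strictly lower on the distillation term than one that diverges from them.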

Results: Deployed in Xiaomi's homepage feed recommendation, the ESDC model achieved a 15% increase in GMV compared with the baseline PLE + ESMM model.

Conclusion: Accurate CVR estimation is crucial for e‑commerce recommendation. The ESDC framework demonstrates how causal inference, cross‑task distillation, twin networks, and uncertainty‑aware weighting can jointly address the four challenges and substantially improve business metrics.

Tags: E-commerce recommendation · AI · CVR · Multi-task learning · Knowledge distillation · Delayed feedback
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
