
Multi‑Task Learning for Joint CTR, CVR, and GMV Prediction in E‑commerce

This article describes how a multi‑task learning framework based on ESMM and attention‑shared embeddings was built to jointly predict click‑through rate, conversion rate, and gross merchandise value in a large e‑commerce platform, addressing data sparsity, bias, and training challenges.

DataFunTalk

Background – In e‑commerce, multiple core metrics such as CTR, CVR, GMV, and UV are monitored. While higher CTR often leads to higher GMV, the relationship is not linear, especially for small CTR improvements, and optimizing GMV alone ignores user experience and suffers from data sparsity.

The team needed a solution that could fuse CTR, GMV, and revenue predictions because relying solely on CPC bidding was insufficient for profitability.

Overall Model Strategy

1. Prediction Targets – User behavior follows the sequence impression → click → conversion. The CVR model estimates pCVR = p(conversion | click, impression), and the CTR model estimates pCTR = p(click | impression); the two are related by pCTCVR = pCTR × pCVR, i.e., the probability of a click followed by a conversion given an impression.
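The decomposition above can be sketched directly; the probability values here are hypothetical, purely to illustrate how the two sub-network outputs combine:

```python
import numpy as np

# Hypothetical per-impression estimates from the two sub-networks.
pCTR = np.array([0.10, 0.05, 0.20])   # p(click | impression)
pCVR = np.array([0.02, 0.08, 0.01])   # p(conversion | click, impression)

# ESMM decomposition: pCTCVR = p(click, conversion | impression) = pCTR * pCVR
pCTCVR = pCTR * pCVR
print(pCTCVR)  # [0.002 0.004 0.002]
```

Because pCVR is always below 1, pCTCVR can never exceed pCTR, which matches the funnel intuition.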

2. Model Architecture – Samples are drawn with a 1:6 positive-to-negative ratio (positive = click). Inspired by Alibaba's ESMM, the full user behavior sequence (click, add-to-cart, favorite, etc.) feeds a multi-task network with two sub-networks: the left predicts pCVR, the right predicts pCTR. Their outputs are multiplied to obtain pCTCVR, and the losses on pCTR and pCTCVR jointly supervise both towers.

Only the CVR sub‑network’s predictions are used for ranking in the experiments.
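As a rough numpy sketch of the ESMM-style training objective (logits, labels, and the simple log-loss helper are all illustrative assumptions, not the production code): the CVR tower receives no direct loss; it is supervised only through the pCTCVR product, which is what lets it train over the entire impression space rather than just clicked samples.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_loss(y, p, eps=1e-7):
    # Binary cross-entropy on probabilities, clipped for numerical safety.
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical logits from the two sub-networks over one batch of impressions.
ctr_logits = np.array([1.2, -0.5, 0.3, -2.0])
cvr_logits = np.array([-1.0, 0.8, -0.2, 0.1])

y_click = np.array([1.0, 0.0, 1.0, 0.0])       # click label per impression
y_click_conv = np.array([1.0, 0.0, 0.0, 0.0])  # click AND conversion label

pCTR = sigmoid(ctr_logits)
pCTCVR = pCTR * sigmoid(cvr_logits)  # CVR tower supervised only via this product

# Joint objective: CTR loss over impressions + CTCVR loss over impressions.
loss = log_loss(y_click, pCTR) + log_loss(y_click_conv, pCTCVR)
```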

Interesting Findings

1. Prediction Bias – Directly using CTCVR scores caused large bias, especially over-estimating samples with few orders. Calibrating the CVR sub-network's output proved more effective.

2. Sharing Attention Layers – Beyond sharing embeddings, the attention parameters for behavior sequences were also shared across tasks, yielding noticeable AUC improvements for CTR, CVR, and CTCVR.
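A minimal sketch of what "sharing the attention layer" means, assuming a single dot-product attention projection over the behavior sequence (the weight matrix, dimensions, and random inputs are all hypothetical): both task towers pool the sequence with the same parameters, and only the downstream tower MLPs differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # behavior embedding dimension (assumption)

# Shared across tasks: not just the embeddings but also this attention projection.
W_att = rng.normal(size=(d, d)) * 0.1

def attend(query, seq):
    """Attention-weighted pooling of a behavior sequence against a candidate item."""
    scores = seq @ W_att @ query             # one score per past behavior, shape (T,)
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ seq                     # pooled vector, shape (d,)

behavior_seq = rng.normal(size=(5, d))  # 5 past behaviors (click / cart / favorite)
candidate = rng.normal(size=(d,))

# Both the CTR and CVR towers call the SAME attend(); gradients from both
# tasks update W_att, which is what yielded the reported AUC gains.
ctr_input = attend(candidate, behavior_seq)
cvr_input = attend(candidate, behavior_seq)
```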

3. DUPN‑style Features – Inspired by DUPN, various ID‑based and aggregated features (shop, brand, category, price, sales, etc.) were incorporated to enrich product representations.

4. Training Challenges – Joint training slowed convergence, caused occasional NaNs, and required careful hyper‑parameter tuning and data cleaning; increasing the number of task losses would exacerbate these issues.

5. Implementation Detail – The CTCVR probability is computed as sigmoid(ctr_logits) * sigmoid(cvr_logits); note this product is a probability, not a logit. TensorFlow's sigmoid_cross_entropy_with_logits was used for the losses, and embedding dimensions were increased from 32 to 128 with a reduced learning rate to avoid NaNs.
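The subtlety is easy to trip over: sigmoid(ctr_logits) * sigmoid(cvr_logits) lies in (0, 1), so a loss function that expects logits cannot be applied to it directly. A hedged numpy sketch (logits and labels are made up; one common workaround, not necessarily the team's exact code, is to map the probability back to a logit via log(p / (1 - p))):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

ctr_logits = np.array([0.4, -1.3])
cvr_logits = np.array([-0.7, 0.9])
labels = np.array([1.0, 0.0])  # click-and-convert labels (hypothetical)

# Product of two sigmoids is a probability in (0, 1), NOT a logit.
p_ctcvr = sigmoid(ctr_logits) * sigmoid(cvr_logits)

# Option A: apply cross-entropy to the probability directly (with clipping).
eps = 1e-7
p = np.clip(p_ctcvr, eps, 1 - eps)
ctcvr_loss = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

# Option B: recover a logit so a logits-based loss API can be reused.
recovered_logit = np.log(p / (1 - p))  # inverse sigmoid of the product
```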

Conclusion – The multi‑task approach outperformed separate CTR and CVR models in production, maintaining CTR while significantly boosting GMV and revenue. Although reinforcement learning was considered for multi‑objective optimization, practical challenges limited its adoption.

Tags: e-commerce, CTR, recommendation system, CVR, multi-task learning, attention, ESMM
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
