
Exploring Generalized Multi‑Objective Recommendation Algorithms for 58 Community

This article details how 58 Community evolved its recommendation system from single‑objective click‑rate optimization to a multi‑objective framework that boosts value‑content share, improves user retention, and leverages cross‑domain embeddings and online CEM‑based parameter tuning to achieve significant performance gains.

DataFunSummit

Guest speaker Zhou Jianbin, senior algorithm architect at 58.com, opens with the business background of 58 Community, a content feed platform serving local users with both professionally generated content (PGC) and user-generated content (UGC).

The platform aims to increase the proportion of value‑content (real‑estate, cars, jobs, local life) while maintaining click‑through rate, retention and interaction metrics.

Initially the goal was single‑objective click‑rate optimization; it later evolved into multi‑objective targets such as value‑content share, stable click‑rate, and user retention, requiring multi‑objective ranking models (e.g., shared‑bottom architectures, ESMM, MMoE).
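The shared‑bottom idea behind these multi‑task models can be sketched as one shared representation feeding a small tower per objective. The dimensions, random weights, and the two task names below are illustrative assumptions, not the production model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions for illustration only.
d_in, d_shared, d_tower = 16, 8, 4

# One shared bottom layer feeds every task tower.
W_shared = rng.normal(size=(d_in, d_shared))

# One small tower per objective: click-rate and value-content share.
towers = {
    "ctr": (rng.normal(size=(d_shared, d_tower)), rng.normal(size=(d_tower,))),
    "value_share": (rng.normal(size=(d_shared, d_tower)), rng.normal(size=(d_tower,))),
}

def predict(x):
    h = relu(x @ W_shared)                        # shared representation
    return {task: sigmoid(relu(h @ W) @ w_out)    # per-task prediction head
            for task, (W, w_out) in towers.items()}

scores = predict(rng.normal(size=(d_in,)))
```

The design point is that all objectives share the bottom layers (so sparse tasks benefit from signal learned by dense ones) while each keeps its own head; MMoE refines this by replacing the single shared bottom with gated mixtures of experts.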

To boost value‑content share, the team moved from rule‑based boosting to improving the ranking model itself, incorporating cross‑domain user behavior from other 58 services as auxiliary features.

The first attempt modified the DeepFM+DIN model by simplifying DIN features and replacing concatenation with pooling for heterogeneous embeddings, but suffered from slow training and negligible impact due to sparse and mismatched cross‑domain items.
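The pooling change can be sketched as follows. This assumes mean pooling over a variable‑length list of cross‑domain behavior embeddings; the talk does not specify which pooling operator was used, so the choice here is an assumption:

```python
import numpy as np

def mean_pool(embeddings, dim=8):
    """Average a variable-length list of embedding vectors into one
    fixed-size vector.

    Returns a zero vector when the user has no cross-domain behavior,
    which keeps the model's input shape constant despite the sparsity
    that plagued the concatenation approach.
    """
    if not embeddings:
        return np.zeros(dim)
    return np.mean(np.stack(embeddings), axis=0)

# Users with 0, 1, or many cross-domain behaviors all yield the same shape.
empty_user = mean_pool([], dim=8)
active_user = mean_pool([np.ones(8), 3.0 * np.ones(8)])
```

Unlike concatenation, pooling does not grow the input with the number of heterogeneous behaviors, at the cost of losing their ordering.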

They then adopted a cross‑domain embedding pre‑training approach inspired by Alibaba’s EGES, extracting core attributes from each business line, compressing IDs, and constructing a weighted user‑behavior graph for DeepWalk‑style random walks, yielding joint item and attribute embeddings.
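The graph‑walk step can be sketched as below. The toy graph, its edge weights (assumed here to come from session co‑occurrence counts), and the item IDs are all illustrative; in the EGES pipeline the resulting walks are fed to a skip‑gram (word2vec‑style) model to produce the joint item and attribute embeddings:

```python
import random

def weighted_random_walk(graph, start, length, rng=random.Random(42)):
    """One DeepWalk-style random walk over a weighted behavior graph.

    graph: {node: [(neighbor, weight), ...]} — heavier edges are
    sampled proportionally more often.
    """
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph.get(walk[-1])
        if not neighbors:
            break  # dead end: stop the walk early
        nodes, weights = zip(*neighbors)
        walk.append(rng.choices(nodes, weights=weights, k=1)[0])
    return walk

# Toy graph linking items from two business lines via shared user behavior.
g = {
    "job:1": [("car:7", 3.0), ("job:2", 1.0)],
    "car:7": [("job:1", 3.0)],
    "job:2": [("job:1", 1.0)],
}
walk = weighted_random_walk(g, "job:1", length=5)
```

Because walks cross business‑line boundaries whenever users do, the learned embeddings place related items from different domains near each other, which is what makes them useful as pre‑trained inputs.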

Replacing the original DIN embeddings with the EGES‑pre‑trained vectors improved click‑through rate and raised the value‑content proportion from 12% to 28%.

For the long‑term retention goal, they identified four key features: interaction rate, first‑visit content weight, last‑visit content weight, and diversity. A re‑ranking formula combines the click prediction with weighted contributions of these features, and a diversity adjustment factor θ is applied.
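The talk does not disclose the exact re‑ranking formula, so the linear combination, the weight values, and the way θ enters below are all assumptions; the sketch only shows the shape of the idea, blending the click prediction with the four retention‑related features:

```python
def rerank_score(p_click, interaction_rate, first_visit_w, last_visit_w,
                 diversity, weights=(0.3, 0.2, 0.2, 0.3), theta=1.0):
    """Illustrative re-ranking score.

    p_click is the model's click prediction; the remaining arguments are
    the four retention features from the talk. The linear blend and the
    multiplicative diversity factor theta are assumed forms, not the
    production formula.
    """
    w1, w2, w3, w4 = weights
    retention_signal = (w1 * interaction_rate + w2 * first_visit_w
                        + w3 * last_visit_w + w4 * diversity)
    return theta * (p_click + retention_signal)

score = rerank_score(p_click=0.8, interaction_rate=0.5,
                     first_visit_w=0.2, last_visit_w=0.2, diversity=0.6)
```

The weights and θ are exactly the free parameters that the CEM procedure described next is designed to tune online.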

Parameter optimization is performed online using the Cross‑Entropy Method (CEM), sampling parameter sets, evaluating rewards (weighted retention, click‑rate, interaction), selecting top‑k, and iterating, allowing continuous adaptation without exhaustive A/B tests.
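The CEM loop above can be sketched as follows. The population size, iteration count, and the toy quadratic reward (standing in for the online weighted mix of retention, click‑rate, and interaction metrics) are illustrative assumptions:

```python
import numpy as np

def cem_optimize(reward_fn, dim, iters=10, pop=50, top_k=10, rng=None):
    """Cross-Entropy Method: sample parameter sets from a Gaussian,
    keep the top-k by reward, refit the Gaussian to the elites, repeat."""
    rng = rng or np.random.default_rng(0)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        rewards = np.array([reward_fn(s) for s in samples])
        elite = samples[np.argsort(rewards)[-top_k:]]   # best top_k samples
        mu = elite.mean(axis=0)                          # refit the Gaussian
        sigma = elite.std(axis=0) + 1e-6                 # avoid collapse to 0
    return mu

# Toy offline reward with a known optimum; online, reward_fn would return
# the observed weighted retention / click-rate / interaction metric.
target = np.array([1.0, -2.0])
best = cem_optimize(lambda w: -np.sum((w - target) ** 2), dim=2)
```

Because each iteration needs only reward evaluations of sampled parameter sets, the same loop can run against live traffic buckets, which is what lets it replace exhaustive A/B tests.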

After about ten CEM iterations, next‑day retention improved by 1% while click‑rate and interaction remained stable, confirming the effectiveness of the multi‑objective and online optimization framework.

Future work includes exploring reinforcement learning for multi‑objective recommendation to further enhance long‑term ecosystem health.

Tags: user retention, recommendation, embedding, online optimization, cross-domain, multi-objective, CEM
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
