
Multi‑Objective Modeling for CRM Opportunity Allocation: Iterative Deep Learning Approaches

This article details the development and iterative optimization of multi‑task deep learning models—including XGBoost‑based baselines, MMoE, ESMM‑enhanced MMoE, PLE, and bias‑aware ranking—to simultaneously improve call‑out and connect‑out rates in a CRM opportunity distribution system, presenting offline gains and online deployment results for each version.

58 Tech

The article describes a joint project between 58's AI Lab, marketing platform, and LBG Yellow Pages teams, launched in September 2020, to intelligently allocate CRM opportunities by framing allocation as a recommendation/search problem and applying traditional machine-learning and deep-learning algorithms.

Background: CRM is the core tool for 58.com sales, converting raw leads into qualified opportunities (商机) that are then filtered, called, and potentially converted into orders. Opportunities come in two types, new and existing; the existing pool is large and low quality, and sales spend most of their time filtering it.

Business Scenario: The "Michigan mode" splits sales into an opportunity-screening group and a follow-up group. The goal is to use AI to raise the screening group's "transfer out" (转出) efficiency and thereby improve overall conversion.

Why Multi-Task Deep Learning? The problem requires optimizing two metrics at once: call-out rate and connect-out rate. Single-task models cannot evaluate or improve both jointly, while multi-task models share information across tasks, respect task relationships, reduce over-fitting, and enable end-to-end deployment.

Version 1: A baseline multi-task model using XGBoost with sample weighting to balance positive/negative samples. Offline AUC improvements of +0.31% (call-out) and +2.72% (connect-out) were observed, and online gains of +26.18% and +12.17% respectively.
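The article does not show how the sample weights were computed; a minimal sketch of one common approach is below, using the "balanced" heuristic (weights inversely proportional to class frequency) whose output can be passed as `sample_weight` to an XGBoost or scikit-learn `fit()` call. The helper name and the heuristic are illustrative assumptions, not the article's exact scheme.

```python
from collections import Counter

def balanced_sample_weights(labels):
    """Per-sample weights inversely proportional to class frequency, so the
    (rare) positive class contributes as much total loss as the negatives.
    Class c gets weight n / (k * count_c), where k is the number of classes."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    per_class = {c: n / (k * cnt) for c, cnt in counts.items()}
    return [per_class[y] for y in labels]

# Example: 1 positive among 4 samples -> positives weighted 2.0, negatives ~0.667,
# so each class carries the same total weight (2.0).
weights = balanced_sample_weights([0, 0, 0, 1])
```

With this weighting, total weight per class is equal, which is the effect the article attributes to sample weighting in the XGBoost baseline.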

Version 2: Introduced MMoE to allow task-specific gating of shared expert networks, achieving +1.95% (call-out) and +0.11% (connect-out) offline, and +7.85% / +8.26% online improvements.
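To make the MMoE idea concrete, here is a minimal NumPy forward-pass sketch: several shared experts transform the input, and each task has its own softmax gate that mixes the expert outputs before its tower. All dimensions, weights, and the two task names are toy assumptions, not the article's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy dimensions (assumed for illustration)
d_in, d_exp, n_experts, n_tasks = 8, 4, 3, 2
W_exp = rng.normal(size=(n_experts, d_in, d_exp))    # one weight matrix per expert
W_gate = rng.normal(size=(n_tasks, d_in, n_experts)) # one input-conditioned gate per task
W_tower = rng.normal(size=(n_tasks, d_exp))          # linear towers producing logits

def mmoe_forward(x):
    # Every expert sees the same shared input.
    experts = np.stack([np.tanh(x @ W_exp[e]) for e in range(n_experts)])  # (E, B, d_exp)
    outs = []
    for t in range(n_tasks):
        gate = softmax(x @ W_gate[t])                    # (B, E) mixture weights per task
        mixed = np.einsum('be,ebd->bd', gate, experts)   # task-specific blend of experts
        outs.append(1 / (1 + np.exp(-(mixed @ W_tower[t]))))  # sigmoid probability
    return outs  # one (B,) probability vector per task

x = rng.normal(size=(5, d_in))
p_call, p_connect = mmoe_forward(x)
```

The key property is that the gates differ per task: each task can emphasize different experts, which is how MMoE handles loosely related objectives like call-out and connect-out.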

Version 3: Added the ESMM paradigm to model the mathematical relationship Call-out = Click-through × Conversion, addressing sample selection bias (SSB) and data sparsity (DS). Offline AUC changes were –0.11% (call-out) and +1.74% (connect-out); online lifts were +5.31% and +11.03%.
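ESMM's core trick can be shown in a few lines: both heads are trained over the full sample space, and the conditional probability is supervised only through the product, so it never trains solely on the biased clicked subset. The sketch below (pure Python, names assumed for illustration) shows the factorization and the two-term loss on a single sample.

```python
import math

def esmm_combine(p_ctr, p_cvr_given_click):
    """p(click AND convert) = p(click) * p(convert | click).
    Supervising this product over all samples avoids sample selection bias."""
    return p_ctr * p_cvr_given_click

def bce(p, y):
    """Binary cross-entropy for one sample (p strictly in (0, 1))."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def esmm_loss(p_ctr, p_cvr, y_click, y_conv):
    """ESMM loss on one sample: the CTR head gets the click label directly,
    while the conditional head is supervised only via the product term."""
    return bce(p_ctr, y_click) + bce(esmm_combine(p_ctr, p_cvr), y_conv)

# e.g. a 30% first-stage chance and a 40% conditional chance -> 12% joint probability
p_joint = esmm_combine(0.3, 0.4)
```

Because no gradient path supervises `p_cvr` on clicked samples alone, the data-sparsity problem of the small conditional sample space is also mitigated.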

Version 4: Replaced MMoE with the PLE architecture, incorporated auxiliary tasks (e.g., call duration), feature-specific experts, and attention-based gating. Offline AUC gains of +1.97% (call-out) and +0.44% (connect-out) were achieved, with online improvements of +6.53% and +3.96%.
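PLE's building block, the CGC layer, differs from MMoE in that each task keeps private experts alongside shared ones, and its gate mixes only its own experts with the shared pool. A minimal NumPy sketch with one private expert per task and one shared expert (dimensions and task names are toy assumptions, not the article's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6  # toy hidden size (assumed)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One CGC layer: each task gates over [its own expert, the shared expert],
# unlike MMoE where every task gates over the same full expert pool.
TASKS = ('call', 'connect')
W_task = {t: rng.normal(size=(d, d)) for t in TASKS}   # one private expert per task
W_shared = rng.normal(size=(d, d))                     # one shared expert
W_gate = {t: rng.normal(size=(d, 2)) for t in TASKS}   # gate over [own, shared]

def cgc_forward(x):
    shared = np.tanh(x @ W_shared)
    out = {}
    for t in TASKS:
        own = np.tanh(x @ W_task[t])
        g = softmax(x @ W_gate[t])                 # (B, 2) mixture weights
        out[t] = g[:, :1] * own + g[:, 1:] * shared
    return out  # task-specific representations, fed to per-task towers

reps = cgc_forward(rng.normal(size=(4, d)))
```

Keeping private experts prevents one task's gradients from dominating shared parameters (the "seesaw" effect PLE was designed to reduce); attention-based gating, as in the article's Version 4, would replace the simple linear gates here.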

Version 5: Modeled bias in ranking (position, claim count, claim time) and introduced a two-stage ranking model that fuses multi-task probabilities with side info. Online results showed +6.78% (call-out) and +11.75% (connect-out) lifts.
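One common way to structure such a second stage, sketched below under assumptions (the fusion weights, feature set, and the convention of fixing position to a constant at serving time are illustrative, not the article's learned model): train a score that includes the bias features, then neutralize them when ranking online.

```python
import math

# Hypothetical fusion weights (illustrative values, not learned from the article's data)
W = {'log_p_call': 1.0, 'log_p_connect': 1.5, 'claim_count': -0.1, 'position': -0.3}

def ranking_score(p_call, p_connect, claim_count, position):
    """Stage-2 score fusing stage-1 multi-task probabilities with side info.
    Bias features (position, claim count) are included during training so the
    model can explain away their effect on the labels."""
    return (W['log_p_call'] * math.log(p_call)
            + W['log_p_connect'] * math.log(p_connect)
            + W['claim_count'] * claim_count
            + W['position'] * position)

def serve_score(p_call, p_connect, claim_count):
    # At serving time the position feature is pinned to a constant, so the
    # final ordering reflects relevance rather than historical exposure.
    return ranking_score(p_call, p_connect, claim_count, position=0)
```

The same pattern generalizes to claim time or any other exposure-driven bias: learn with the bias feature, serve with it held fixed.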

Conclusion & Outlook: Multi-objective modeling consistently outperforms single-objective baselines, with each iteration bringing incremental gains. Future work will explore richer multimodal features, Pareto-optimal multi-task training, and deeper business-specific adaptations.

Tags: model optimization, recommendation, deep learning, multi-task learning, CRM, opportunity allocation
Written by 58 Tech

Official tech channel of 58, a platform for tech innovation, sharing, and communication.