
Multi-Objective Modeling for CRM Opportunity Smart Allocation: Iterative Deep Learning Solutions

This article describes the evolution of a multi‑objective deep‑learning framework for automatically assigning CRM opportunities to salespeople, detailing five model versions—from an XGBoost baseline with sample weighting to advanced PLE‑based architectures—while reporting offline and online performance gains in both call‑out and connection‑out conversion rates.

DataFunTalk

In September 2020, the AI Lab, Marketing Platform (CRM), and LBG Yellow Pages jointly launched a smart opportunity allocation project, abstracting the CRM opportunity distribution process as a recommendation/search problem and applying traditional machine learning and deep learning algorithms to assign suitable leads to sales staff, thereby improving conversion rates and revenue.

Background

The CRM system is a critical tool for 58.com sales, converting raw leads into qualified opportunities and ultimately orders. Opportunities are divided into new and existing pools, with the latter being abundant but of lower quality, prompting the need for AI‑driven selection to boost conversion.

Business Scenario

The "Michigan mode" splits sales teams into an opportunity group that pre‑filters leads for potential conversion and a sales group that follows up on the filtered leads. The project focuses on enhancing the opportunity group's "transfer out" efficiency using multi‑objective modeling.

Why Multi‑Task Deep Learning?

Simultaneously optimizing the call‑out rate and the connection‑out rate calls for a multi‑objective approach: a single‑task model can optimize only one of the two metrics at a time. Multi‑task learning shares information across related tasks, captures correlations between them, reduces total model size, and allows flexible per‑task customization.

Iterative Model Development

Version 1: XGBoost with sample weighting to balance positive and negative samples for both tasks, achieving modest offline AUC improvements and online gains of 26.18% in call‑out rate and 12.17% in connection‑out rate.
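The sample‑weighting idea in version 1 can be sketched with a small helper of the kind whose output one might pass to XGBoost's `sample_weight` argument. `balance_weights` is an illustrative name, not from the article:

```python
def balance_weights(labels):
    """Give each class a weight inversely proportional to its frequency,
    so positives and negatives contribute equal total weight to the loss."""
    n = len(labels)
    n_pos = sum(labels)
    n_neg = n - n_pos
    # rare positives get the larger per-sample weight
    w_pos = n / (2 * n_pos)
    w_neg = n / (2 * n_neg)
    return [w_pos if y == 1 else w_neg for y in labels]

labels = [1, 0, 0, 0, 0, 0, 0, 0, 1, 0]
weights = balance_weights(labels)
# each class now contributes half of the total weight
```

The total weight stays equal to the number of samples, so the effective loss scale is unchanged while the class imbalance is removed.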

Version 2: Introduced MMoE (Multi‑gate Mixture‑of‑Experts) to share expert networks across tasks while allowing task‑specific gating, yielding offline AUC improvements of 1.95% and 0.11% and online gains of 7.85% and 8.26% respectively.
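To make the architecture concrete, here is a minimal NumPy forward‑pass sketch of MMoE: several shared expert networks, with a separate softmax gate per task mixing expert outputs before each task tower. Class and parameter names (`MMoE`, `d_expert`, etc.) are illustrative, not the production model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MMoE:
    """Minimal MMoE forward pass: shared experts, one softmax gate per task."""
    def __init__(self, d_in, d_expert, n_experts, n_tasks, seed=0):
        rng = np.random.default_rng(seed)
        self.W_exp = rng.normal(size=(n_experts, d_in, d_expert)) * 0.1   # expert weights
        self.W_gate = rng.normal(size=(n_tasks, d_in, n_experts)) * 0.1   # one gate per task
        self.W_tower = rng.normal(size=(n_tasks, d_expert, 1)) * 0.1      # task towers

    def forward(self, x):  # x: (batch, d_in)
        # every expert sees every sample; ReLU keeps the sketch nonlinear
        experts = np.maximum(0.0, np.einsum('bi,eij->bej', x, self.W_exp))
        outs = []
        for t in range(self.W_gate.shape[0]):
            gate = softmax(x @ self.W_gate[t])               # (batch, n_experts)
            mixed = np.einsum('be,bej->bj', gate, experts)   # gate-weighted expert mix
            logit = mixed @ self.W_tower[t]                  # task-specific tower
            outs.append(1.0 / (1.0 + np.exp(-logit)))        # per-task probability
        return outs
```

The key point is that the experts are shared while each task learns its own gate, so tasks can weight the shared representations differently.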

Version 3: Added ESMM (Entire Space Multi‑task Model) to exploit the relationship CTCVR = CTR × CVR, addressing sample‑selection bias and data sparsity, resulting in online improvements of 5.31% (call‑out) and 11.03% (connection‑out).
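The ESMM idea can be sketched as a loss over the entire exposure space: supervise pCTR directly and supervise the product pCTR × pCVR with the downstream label, so pCVR is never trained only on the biased clicked subset. In this project the call‑out / connection‑out funnel plays the role of click/conversion; `esmm_loss` is an illustrative stand‑in, not the article's exact objective:

```python
import math

def esmm_loss(p_ctr, p_cvr, click, convert):
    """ESMM-style loss sketch over the entire exposure space: supervise
    pCTR with the click label and pCTCVR = pCTR * pCVR with the
    conversion label, so pCVR is never fit on clicked-only samples."""
    def bce(p, y):
        p = min(max(p, 1e-7), 1 - 1e-7)   # clip for numerical safety
        return -(y * math.log(p) + (1 - y) * math.log(1 - p))
    p_ctcvr = p_ctr * p_cvr               # CTCVR = CTR x CVR
    return bce(p_ctr, click) + bce(p_ctcvr, convert)
```

Because both loss terms are defined on all exposed samples, the model trains and serves on the same sample space.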

Version 4: Replaced MMoE with PLE (Progressive Layered Extraction), and incorporated auxiliary tasks (e.g., call duration), feature‑specific experts, and attention‑based gating, achieving online gains of 6.53% and 3.96%.
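PLE stacks CGC (Customized Gate Control) layers, and a single layer can be sketched as follows: each task's gate mixes the shared experts with that task's private experts only, progressively separating task‑specific signals from shared ones. Shapes and names here are illustrative; the article's version additionally uses feature‑specific experts and attention‑based gating:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cgc_layer(x, W_shared, W_task, W_gate):
    """One CGC layer, the building block PLE stacks.
    W_shared: (n_shared, d_in, d_out)           shared experts, visible to all tasks
    W_task:   (n_tasks, n_own, d_in, d_out)     private experts per task
    W_gate:   (n_tasks, d_in, n_shared + n_own) per-task gate weights
    """
    shared = np.maximum(0.0, np.einsum('bi,sij->bsj', x, W_shared))
    outs = []
    for t in range(W_task.shape[0]):
        own = np.maximum(0.0, np.einsum('bi,oij->boj', x, W_task[t]))
        # a task's gate sees the shared experts plus its own experts only
        experts = np.concatenate([shared, own], axis=1)
        gate = softmax(x @ W_gate[t])            # (batch, n_shared + n_own)
        outs.append(np.einsum('be,bej->bj', gate, experts))
    return outs
```

Unlike MMoE, where every gate mixes the same expert pool, CGC keeps private experts invisible to other tasks, which is what mitigates the negative transfer PLE targets.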

Version 5: Modeled bias in ranking features and introduced a two‑stage ranking model that combines multi‑task probabilities with side information, delivering online lifts of 6.78% (call‑out) and 11.75% (connection‑out).
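The article does not detail the stage‑two combiner, so here is a deliberately simple sketch of the idea: fuse the multi‑task model's probabilities with side information into one ranking score and sort candidates by it. In practice the combiner would itself be a learned model; `second_stage_score`, `rank_candidates`, and the fixed weights are hypothetical:

```python
def second_stage_score(p_call, p_connect, side, weights):
    """Hypothetical stage-two combiner: a weighted fusion of the
    stage-one task probabilities with side-information features."""
    feats = [p_call, p_connect] + list(side)
    return sum(w * f for w, f in zip(weights, feats))

def rank_candidates(candidates, weights):
    """Sort candidate opportunities by stage-two score, best first.
    Each candidate is (p_call, p_connect, side_features)."""
    return sorted(candidates,
                  key=lambda c: second_stage_score(c[0], c[1], c[2], weights),
                  reverse=True)
```

The two‑stage split lets the multi‑task network focus on calibrated probabilities while the ranker handles business‑side signals.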

Summary and Outlook

The multi‑objective approach consistently outperformed single‑objective baselines across five iterations, demonstrating the value of deep‑learning architectures, bias mitigation, and sophisticated ranking strategies. Future work will explore richer multimodal features, Pareto‑optimal training methods, and deeper business‑specific adaptations.

Tags: model optimization, deep learning, A/B testing, multi-task learning, recommendation systems, CRM
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
