
Meta-Learning and Cross-Domain Recommendation: Industrial Practices at Tencent TRS

This article presents Tencent TRS's industrial practice of applying meta‑learning and cross‑domain recommendation to address personalization challenges, detailing problem definitions, solution architectures, algorithmic choices such as MAML, deployment strategies, and the cost‑effective outcomes achieved across multiple scenarios.

DataFunSummit

The presentation introduces Tencent TRS's industrial case study on meta‑learning and cross‑domain recommendation, divided into two main parts: personalization via meta‑learning and cross‑domain recommendation.

Personalization Pain Points: In recommendation scenarios, data follows a long‑tail distribution, so a single global model favors large, high‑traffic scenarios and struggles to serve the diverse long tail with a high degree of personalization.

Industry Solutions: Existing approaches such as PPNet/POSO and on‑device personalization have limitations. The proposed solution instead deploys a dedicated model per scenario in the cloud, achieving a high degree of personalization while keeping the modeling approach general across users, user groups, and items.

Meta‑Learning for Personalization: The goal is to provide a personalized model for each user or group without incurring extra cost or performance loss. By keeping a shared model architecture and learning scene‑specific parameters, meta‑learning (especially Model‑Agnostic Meta‑Learning, MAML) enables rapid adaptation to new tasks.

MAML Process: Meta‑training initializes shared parameters θ, samples a batch of tasks each with a support set and a query set, adapts θ on each task's support set, evaluates the adapted parameters' loss on that task's query set, aggregates these losses across tasks, and back‑propagates to refine the initialization θ. Fine‑tuning follows the same inner procedure, applying a few steps of SGD on the task‑specific support data.
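The loop above can be sketched in plain Python on a toy family of 1‑D regression tasks (each task is a line y = a·x with its own slope). This is an illustrative first‑order MAML approximation, not TRS's production code; all function names, the task family, and the hyperparameters are assumptions for the sketch.

```python
import random

def loss_mse(theta, data):
    # mean squared error of the prediction theta * x against targets y
    return sum((theta * x - y) ** 2 for x, y in data) / len(data)

def grad_mse(theta, data):
    # gradient of the MSE above with respect to the scalar parameter theta
    return sum(2 * (theta * x - y) * x for x, y in data) / len(data)

def sample_task(rng):
    # a "task" is a linear function y = a * x with a task-specific slope a
    a = rng.uniform(0.5, 2.5)
    data = [(x, a * x) for x in (rng.uniform(-1, 1) for _ in range(10))]
    return data[:5], data[5:]                       # support set, query set

def meta_train(steps=200, tasks_per_step=4, alpha=0.1, beta=0.05, seed=0):
    rng = random.Random(seed)
    theta = 0.0                                     # shared initialization
    for _ in range(steps):
        meta_grad = 0.0
        for _ in range(tasks_per_step):
            support, query = sample_task(rng)
            adapted = theta - alpha * grad_mse(theta, support)  # inner SGD step
            meta_grad += grad_mse(adapted, query)   # first-order outer gradient
        theta -= beta * meta_grad / tasks_per_step  # refine the initialization
    return theta

def finetune(theta, support, alpha=0.1, steps=3):
    # deployment-time adaptation: a few SGD steps on task-specific support data
    for _ in range(steps):
        theta -= alpha * grad_mse(theta, support)
    return theta
```

The first‑order variant reuses the query gradient at the adapted parameters instead of back‑propagating through the inner update, which keeps the sketch free of second derivatives; exact MAML differentiates through the inner SGD step.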

Industrial Challenges: Meta‑training requires double sampling (over tasks, then over samples within each task), massive storage for task‑to‑sample mappings, and high compute overhead. Solutions include batch‑level sample selection, a "load‑and‑release" model storage strategy, and restricting meta‑learning to the core network layers while excluding the embedding tables.
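One way to read "load‑and‑release" is as a bounded cache: per‑task core‑layer parameters are loaded from remote storage on demand and the least‑recently‑used sets are released to cap memory, while the shared embeddings stay resident in the main model. The class and callback below are hypothetical names for that idea, not the TRS implementation.

```python
from collections import OrderedDict

class LoadAndReleaseStore:
    """Keeps at most `capacity` per-task core-layer parameter sets in memory;
    least-recently-used sets are released (evicted) to bound storage."""

    def __init__(self, capacity, load_fn):
        self.capacity = capacity
        self.load_fn = load_fn            # fetches parameters from remote storage
        self._cache = OrderedDict()       # insertion order doubles as recency order

    def acquire(self, task_id):
        if task_id in self._cache:
            self._cache.move_to_end(task_id)        # mark as recently used
        else:
            self._cache[task_id] = self.load_fn(task_id)
            if len(self._cache) > self.capacity:
                self._cache.popitem(last=False)     # release the stalest params
        return self._cache[task_id]
```

Because only the small core‑layer parameters are task‑specific, the eviction cost is bounded and a release is just a dictionary pop; the large embedding tables never enter this store.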

Cross‑Domain Recommendation Pain Points: Multiple recommendation entry points across scenarios lead to high cost and to data sparsity for small or long‑tail scenarios. Feature misalignment between scenarios and differing objectives further complicate joint modeling.

Cross‑Domain Solution: Shared embeddings for common features and scene‑specific embeddings for personalized features are combined via shared and per‑scene experts, with a gating network fusing their outputs before the tower. This architecture supports various model backbones (e.g., shared bottom, MMoE, PLE) and enables knowledge transfer across scenes, reducing model size, training time, and serving cost.
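A minimal sketch of that wiring, in plain Python with scalar experts: one embedding table is shared by all scenes for common features, each scene owns a table for its personalized features, and a per‑scene softmax gate mixes shared and scene‑specific expert outputs. Class names, dimensions, and the toy expert form are assumptions; a real backbone (MMoE, PLE) would use MLP experts and towers.

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class CrossDomainModel:
    """Shared + scene-specific embeddings feed shared and per-scene experts;
    a per-scene gate fuses expert outputs before the (omitted) tower."""

    def __init__(self, scenes, emb_dim=4, n_shared=2, n_scene=1, seed=0):
        self.rng = random.Random(seed)
        self.emb_dim = emb_dim
        vec = lambda: [self.rng.gauss(0, 0.1) for _ in range(2 * emb_dim)]
        self.shared_emb = {}                        # one table for common features
        self.scene_emb = {s: {} for s in scenes}    # per-scene personalized features
        self.shared_experts = [vec() for _ in range(n_shared)]
        self.scene_experts = {s: [vec() for _ in range(n_scene)] for s in scenes}
        self.gate = {s: [vec() for _ in range(n_shared + n_scene)] for s in scenes}

    def _embed(self, table, feature):
        if feature not in table:                    # lazily initialize embeddings
            table[feature] = [self.rng.gauss(0, 0.1) for _ in range(self.emb_dim)]
        return table[feature]

    def score(self, scene, common_feat, scene_feat):
        # concatenate the shared and scene-specific embeddings
        x = (self._embed(self.shared_emb, common_feat)
             + self._embed(self.scene_emb[scene], scene_feat))
        experts = self.shared_experts + self.scene_experts[scene]
        outs = [sum(w * xi for w, xi in zip(e, x)) for e in experts]
        weights = softmax([sum(w * xi for w, xi in zip(g, x))
                           for g in self.gate[scene]])
        return sum(w * o for w, o in zip(weights, outs))  # gated fusion
```

The shared table is what carries knowledge across scenes: a user seen in a large scene contributes the same embedding row when scored in a sparse one.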

Cost & Performance Gains: Unified modeling increases sample volume but reduces feature dimensionality, yielding a 21% reduction in offline processing cost, a 24% saving in CPU data‑pull cost, and up to 40% faster iteration. Multi‑scene fusion also improves GPU utilization.

Practical Deployment: The meta‑learning framework is packaged as reusable components (support‑set I/O, meta‑train/fine‑tune interfaces, GPU serving adapters) in a model zoo, so engineers can plug in task‑specific logic without handling low‑level training or serving pipelines.
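The "plug in task‑specific logic" contract above can be expressed as an abstract base class: the framework owns the serving path, and an engineer implements only the hooks. The class and method names here are illustrative guesses at such an interface, not TRS's actual API.

```python
from abc import ABC, abstractmethod

class MetaLearningComponent(ABC):
    """Model-zoo component template: the framework provides support-set I/O,
    the meta-train/fine-tune loops, and GPU serving adapters; engineers
    implement only the task-specific hooks below."""

    @abstractmethod
    def build_support_set(self, task_id):
        """Return the support samples for one task (a user, group, or scene)."""

    @abstractmethod
    def adapt(self, shared_params, support_set):
        """Return task-specific parameters after fine-tuning on the support set."""

    @abstractmethod
    def predict(self, params, request):
        """Score one request with the adapted parameters."""

    def serve(self, shared_params, task_id, request):
        # generic serving path, reused unchanged by every plugged-in model
        params = self.adapt(shared_params, self.build_support_set(task_id))
        return self.predict(params, request)
```

Keeping `serve` concrete in the base class is the point of the model zoo: the fine‑tune‑then‑score pipeline is written once, and each scenario only supplies its data access and adaptation rule.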

Overall, meta‑learning and cross‑domain recommendation have demonstrated significant effectiveness and efficiency improvements across Tencent's recommendation systems.

Tags: personalization, recommendation systems, meta-learning, cross-domain, industrial AI, MAML
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
