
Evolution of Ele.me Recommendation Algorithms and Online Learning Practice

This article outlines the background of Ele.me's recommendation business, details the evolution of its recommendation algorithms from rule‑based models to deep learning and online learning, and explains the practical implementation of real‑time data pipelines, feature engineering, model training, and deployment.

DataFunTalk

Ele.me's recommendation system is the main traffic entry of the food-delivery app: its slots span the homepage, category pages, and search, and together they account for over 90% of orders.

1. Recommendation Business Background

The recommendation product includes homepage, category, and search slots, driving the majority of the platform's orders. Optimization goals evolve with the business stage: high-level objectives such as click-through rate, conversion, GMV, and user satisfaction are decomposed into sub-models that can each be optimized and measured.
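The decomposition idea can be sketched concretely. In the hypothetical example below, a GMV-style objective is factored into separate sub-model outputs (`p_ctr`, `p_cvr`) multiplied by order value; the feature values, item names, and the simple product form are illustrative assumptions, not Ele.me's actual objective.

```python
# Hypothetical sketch: decomposing a GMV-style objective into sub-model scores.
# p_ctr and p_cvr would come from separately trained models; price from item data.

def expected_gmv(p_ctr: float, p_cvr: float, price: float) -> float:
    """Expected revenue of showing one item: P(click) * P(order | click) * value."""
    return p_ctr * p_cvr * price

# Ranking candidates by the decomposed objective (illustrative numbers):
candidates = [
    {"item": "noodles", "p_ctr": 0.12, "p_cvr": 0.30, "price": 25.0},
    {"item": "burger",  "p_ctr": 0.20, "p_cvr": 0.10, "price": 40.0},
]
ranked = sorted(
    candidates,
    key=lambda c: expected_gmv(c["p_ctr"], c["p_cvr"], c["price"]),
    reverse=True,
)
```

Because each factor is a separate model, the CTR and conversion components can be retrained or A/B-tested independently while the combination rule stays fixed.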

2. Algorithm Evolution

2.1 Data & Feature Upgrades

Data pipelines were upgraded from batch to real-time using Flume and Kafka, enabling online feature generation and removing the feature leakage that stale batch snapshots introduced. Feature coverage expanded to multi-dimensional real-time features, large-scale sparse vectors, and learned vector representations for items, users, and queries. Monitoring was added for feature drift, anomalies, and overall data quality.
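A minimal sketch of online feature generation: a sliding-window click counter that is updated per event and read at serving time. In production the events would arrive from Kafka; here they are fed in directly, and the window length, event schema, and item IDs are illustrative assumptions.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 600  # assumed 10-minute sliding window

class SlidingClickCounter:
    """Real-time feature: clicks per item within a sliding time window."""

    def __init__(self, window: int = WINDOW_SECONDS):
        self.window = window
        self.events = defaultdict(deque)  # item_id -> click timestamps

    def add_click(self, item_id: str, ts: float) -> None:
        # Called for each click event consumed from the stream.
        self.events[item_id].append(ts)

    def count(self, item_id: str, now: float) -> int:
        q = self.events[item_id]
        while q and q[0] <= now - self.window:
            q.popleft()  # evict events that fell out of the window
        return len(q)

c = SlidingClickCounter()
c.add_click("shop_1", ts=100)
c.add_click("shop_1", ts=400)
recent_clicks = c.count("shop_1", now=800)  # ts=100 expired; only ts=400 counts
```

Computing the feature at request time (rather than joining a batch snapshot) is what keeps the serving-time value consistent with what the model saw in training.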

2.2 Model Upgrades

Early ranking relied on manually tuned rule-based scoring. In 2016 a simple logistic regression (LR) model replaced hand-set weights with learned ones, improving CTR by ~10%. Non-linear models such as GBDT, FM, and XGBoost were then adopted, boosting performance further. By 2017 deep learning models (Wide&Deep, DeepFM) were integrated, providing end-to-end feature learning and higher-order feature interactions.
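To make the LR-to-FM step concrete, here is a minimal second-order factorization machine scoring function, using the standard O(nk) reformulation of the pairwise interaction term. The parameters `w0`, `w`, and `V` are illustrative, not Ele.me's production model.

```python
import numpy as np

def fm_score(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """Second-order FM score.

    x: feature vector of shape (n,)
    w0: global bias; w: linear weights of shape (n,)
    V: factor matrix of shape (n, k), row i is the latent vector of feature i.
    """
    linear = w0 + w @ x
    # sum_{i<j} (v_i . v_j) x_i x_j, computed in O(n*k) instead of O(n^2):
    interaction = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return float(linear + interaction)
```

Compared with LR, the shared latent factors let the model score feature pairs (e.g., user-cuisine combinations) that rarely co-occur in training data.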

3. Online Learning Practice

3.1 Characteristics of Online Learning

Online learning addresses data distribution shifts in the fast‑changing food‑delivery domain by continuously updating model parameters with streaming samples, avoiding the need to store massive offline datasets.
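The core mechanic can be sketched with a plain stochastic-gradient update on logistic loss: each streaming sample updates the weights once and is then discarded, so no offline dataset accumulates. The sparse feature encoding, learning rate, and toy samples below are illustrative assumptions.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def online_sgd_step(w: dict, x: dict, y: int, lr: float = 0.1) -> dict:
    """One logistic-regression SGD step on a single sparse sample.

    w: weight dict {feature_index: weight}, updated in place
    x: sparse features {feature_index: value}; y: label in {0, 1}
    """
    p = sigmoid(sum(w.get(i, 0.0) * v for i, v in x.items()))
    g = p - y  # gradient scale of the log loss w.r.t. the logit
    for i, v in x.items():
        w[i] = w.get(i, 0.0) - lr * g * v
    return w

# Consume a (toy) sample stream; each sample is seen exactly once.
w = {}
stream = [({0: 1.0, 1: 1.0}, 1), ({0: 1.0, 2: 1.0}, 0)]
for x, y in stream:
    online_sgd_step(w, x, y)
```

This one-pass update is what lets the model track distribution shifts (weather, mealtime peaks, promotions) within minutes rather than waiting for the next batch retrain.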

3.2 Theoretical Basis

The implementation draws on the Wide&Deep (Google) and DeepFM papers, adapting them to meet production latency and accuracy requirements for e-commerce recommendation.

3.3 Technical Stack

The stack includes real‑time data collection via Storm, feature services, model training with FTRL, parameter snapshots stored in Redis, and an online prediction service that periodically pulls the latest parameters.
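The trainer/predictor handoff via parameter snapshots can be sketched as follows. A plain dict stands in for Redis here, and the key names, versioning scheme, and JSON serialization are assumptions for illustration; in production this would use a Redis client against the keys the training job writes.

```python
import json

class SnapshotStore:
    """In-memory stand-in for the Redis snapshot keyspace (illustrative keys)."""

    def __init__(self):
        self._kv = {}

    def write_snapshot(self, weights: dict, version: int) -> None:
        # Trainer side: publish weights first, then flip the version pointer.
        self._kv[f"model:weights:{version}"] = json.dumps(weights)
        self._kv["model:latest_version"] = version

    def pull_latest(self):
        # Predictor side: periodically poll for the newest snapshot.
        v = self._kv.get("model:latest_version")
        if v is None:
            return None, None
        return v, json.loads(self._kv[f"model:weights:{v}"])

store = SnapshotStore()
store.write_snapshot({"w0": 0.10, "w1": -0.20}, version=1)
store.write_snapshot({"w0": 0.15, "w1": -0.18}, version=2)
version, weights = store.pull_latest()  # predictor sees the version-2 snapshot
```

Writing the weights before flipping the version pointer means a predictor never pulls a half-written snapshot, even though trainer and predictor run as separate processes.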

3.4 Online Learning Workflow

Real‑time effect attribution → online model training (FTRL) → parameter snapshot (Redis) → online prediction. The workflow forms a closed loop by joining logs from user behavior, server, and orders.
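The training step in this loop uses FTRL. Below is a sketch of the per-coordinate FTRL-Proximal update for a sparse logistic-regression model; the hyperparameters and the training stream are illustrative, not Ele.me's production settings.

```python
import math
from collections import defaultdict

class FTRL:
    """FTRL-Proximal online learner for sparse logistic regression (sketch)."""

    def __init__(self, alpha=0.5, beta=1.0, l1=0.01, l2=0.1):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = defaultdict(float)  # per-coordinate accumulated gradients
        self.n = defaultdict(float)  # per-coordinate squared-gradient sums

    def weight(self, i) -> float:
        z = self.z[i]
        if abs(z) <= self.l1:
            return 0.0  # L1 keeps rarely useful features exactly at zero
        return -(z - math.copysign(self.l1, z)) / (
            (self.beta + math.sqrt(self.n[i])) / self.alpha + self.l2)

    def predict(self, x: dict) -> float:
        logit = sum(self.weight(i) * v for i, v in x.items())
        return 1.0 / (1.0 + math.exp(-logit))

    def update(self, x: dict, y: int) -> float:
        p = self.predict(x)
        for i, v in x.items():
            g = (p - y) * v
            sigma = (math.sqrt(self.n[i] + g * g) - math.sqrt(self.n[i])) / self.alpha
            self.z[i] += g - sigma * self.weight(i)
            self.n[i] += g * g
        return p
```

FTRL's per-coordinate learning rates and L1 sparsity are what make it practical here: the model stays compact enough to snapshot into Redis every few minutes while still adapting each weight at its own pace.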

3.5 Practical Tips

Several engineering practices proved important in production:

- Sampling strategies (position truncation, business-rule filtering) reduce label noise.
- Timed parameter updates (e.g., every 5 minutes) balance model stability against freshness.
- Sample imbalance is handled by caching samples and mixing them back with weights.
- Input normalization accelerates convergence of the online model.
- Visualization and debugging tools expose real-time ranking results and feature weights.
- Real-time A/B testing compares algorithm versions, entry points, list positions, and feature effects.
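The input-normalization tip can be sketched with Welford's running mean/variance, which lets each feature be z-scored on the fly without a second pass over the stream. The feature (item price) and values are illustrative.

```python
class RunningNormalizer:
    """Streaming z-score normalization via Welford's online algorithm."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def normalize(self, x: float) -> float:
        if self.count < 2:
            return 0.0  # not enough data for a meaningful std yet
        std = (self.m2 / (self.count - 1)) ** 0.5
        return (x - self.mean) / std if std > 0 else 0.0

norm = RunningNormalizer()
for price in [10.0, 20.0, 30.0]:  # e.g., streaming item prices
    norm.update(price)
z = norm.normalize(20.0)  # mean is 20, std is 10, so this maps to 0.0
```

Keeping statistics online (rather than from a batch job) matters for online learning: the normalization itself must track the same distribution shifts the model is adapting to.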

Author: Liu Jin, Ele.me algorithm expert with extensive experience in building real‑time recommendation systems.

Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
