
Practical Application of TensorFlow Ranking (TFR) in iQIYI Overseas Recommendation System

This article describes how iQIYI's overseas recommendation team adopted TensorFlow Ranking to replace traditional CTR models with Learning‑to‑Rank, detailing the framework’s architecture, challenges such as regularization and sequence feature support, the solutions implemented, and experimental results showing significant performance gains.


Introduction

In modern internet services, recommendation systems are crucial for content distribution. iQIYI's overseas recommendation team introduced TensorFlow Ranking (TFR) to improve ranking effectiveness, moving from traditional CTR estimation to Learning-to-Rank (LTR) methods.

Algorithm Evolution: From CTR to LTR

CTR models estimate the click probability of a single item in isolation, which does not reflect the true ranking problem, where multiple items are shown simultaneously and only their relative order matters. LTR instead treats the problem as ordering a set of items, using pairwise or listwise training data and ranking-specific evaluation metrics such as NDCG (normalized discounted cumulative gain), MAP (mean average precision), and ARP (average relevance position).
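To make the listwise evaluation concrete, here is a minimal, framework-free sketch of NDCG (the function names are illustrative, not from TFR's source):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k items in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """NDCG: DCG of the given ordering divided by DCG of the ideal ordering."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# A perfectly ordered list scores exactly 1.0; misordering lowers the score.
print(ndcg_at_k([3, 2, 1, 0], 4))  # 1.0
print(ndcg_at_k([1, 2, 3, 0], 4))  # < 1.0
```

Unlike per-item CTR log-loss, this metric depends on the entire displayed list, which is why LTR losses and metrics operate on lists rather than single examples.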

Framework Design: TensorFlow Ranking (TFR)

TFR is an official TensorFlow library that abstracts LTR components—losses, metrics, and data handling—into high-level APIs. It integrates with TensorFlow Estimator via make_groupwise_ranking_fn, which encapsulates the model function, loss, and metrics, allowing developers to focus on the scoring function while the framework manages ranking-specific logic.
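A rough sketch of how that wiring looks with the TF1-era Estimator API (the network shape, hyperparameters, and the `score_fn` body are illustrative assumptions, not iQIYI's actual model):

```python
import tensorflow as tf
import tensorflow_ranking as tfr

def score_fn(context_features, group_features, mode, params, config):
    """User-supplied scoring function: scores one group of examples.
    TFR handles the listwise bookkeeping around it."""
    inputs = tf.concat(
        [tf.layers.flatten(t) for t in group_features.values()], axis=1)
    hidden = tf.layers.dense(inputs, units=64, activation=tf.nn.relu)
    return tf.layers.dense(hidden, units=1)

def train_op_fn(loss):
    return tf.train.AdagradOptimizer(0.1).minimize(
        loss, global_step=tf.train.get_global_step())

# Loss and metrics come from the framework, keyed by name.
loss_fn = tfr.losses.make_loss_fn(
    tfr.losses.RankingLossKey.PAIRWISE_LOGISTIC_LOSS)
metric_fns = {
    "metric/ndcg@5": tfr.metrics.make_ranking_metric_fn(
        tfr.metrics.RankingMetricKey.NDCG, topn=5),
}
ranking_head = tfr.head.create_ranking_head(
    loss_fn=loss_fn, eval_metric_fns=metric_fns, train_op_fn=train_op_fn)

# make_groupwise_ranking_fn ties scoring function, head, loss, and metrics
# into a standard Estimator model_fn.
model_fn = tfr.model.make_groupwise_ranking_fn(
    group_score_fn=score_fn, group_size=1, ranking_head=ranking_head)
estimator = tf.estimator.Estimator(model_fn=model_fn)
```

The developer's surface area is essentially `score_fn` plus configuration; the groupwise model function handles slicing lists into groups and aggregating scores.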

Implementation Details

The training pipeline follows: raw listwise data → user-defined feature_columns → transform_fn for feature conversion → scoring function → ranking_head computes loss and metrics → optimizer updates the model. The framework's source code organizes these steps across modules such as losses.py, metrics.py, data.py, feature.py, head.py, and model.py.
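The loss the ranking_head computes is listwise or pairwise rather than per-example. As an illustration, here is the pairwise logistic loss (one of TFR's built-in loss keys) written out in plain Python, independent of the framework:

```python
import math

def pairwise_logistic_loss(scores, labels):
    """For every pair where labels[i] > labels[j], penalize
    log(1 + exp(-(scores[i] - scores[j]))); average over such pairs."""
    loss, num_pairs = 0.0, 0
    for si, yi in zip(scores, labels):
        for sj, yj in zip(scores, labels):
            if yi > yj:
                loss += math.log1p(math.exp(-(si - sj)))
                num_pairs += 1
    return loss / num_pairs if num_pairs else 0.0

# Scores that agree with the labels incur a lower loss than inverted scores.
print(pairwise_logistic_loss([2.0, 1.0, 0.0], [2, 1, 0]))
print(pairwise_logistic_loss([0.0, 1.0, 2.0], [2, 1, 0]))
```

Because the loss couples items within a list, the whole list must flow through the pipeline together, which is what the listwise data format and transform_fn stages are for.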

Problems Encountered and Solutions

1. Regularization not supported: Because loss computation resides inside TFR, adding regularization required extending GroupwiseRankingModel to expose model parameters. The team patched the source to pass regularization_losses to the loss function.
2. Sequence features unsupported: TFR's _transform_fn only handled numeric and categorical columns. The solution involved modifying feature.py to encode sequence_categorical_column types, enabling sequence features in context_features.
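The effect of the first patch is simply to fold the model's regularization terms into the ranking loss before optimization. A framework-free sketch of that idea (function and parameter names here are hypothetical, not from the patched source):

```python
def total_loss(ranking_loss, weights, l2_lambda):
    """Ranking loss plus an L2 penalty over model weights, mirroring
    what passing regularization_losses into the loss function achieves."""
    return ranking_loss + l2_lambda * sum(w * w for w in weights)

# 0.5 ranking loss + 0.01 * (1 + 4) penalty = 0.55 total.
print(total_loss(0.5, [1.0, -2.0], 0.01))  # 0.55
```

In the actual patch the penalty terms come from the graph's regularization collection rather than an explicit weight list; the arithmetic, however, is the same.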

Experiment: LTR vs. Native Model

Three traffic groups were tested: BaseB (no ranking model), Ranking (native TensorFlow Estimator), and TfrRankingB (TFR-based LTR). All three used identical network structures and data. Online A/B tests over four days showed that TfrRankingB outperformed Ranking, which in turn outperformed BaseB on CTR, UCTR, and LPLAY metrics.

Conclusion

Using TFR greatly simplifies LTR model development and yields measurable improvements in recommendation quality. Future work includes migrating to TensorFlow 2.x and monitoring TFR's compatibility with the newer runtime.

References

1. Burges, C.J.C. "From RankNet to LambdaRank to LambdaMART: An Overview", 2010.
2. Liu, T.Y. "Learning to Rank for Information Retrieval", 2011.
3. Pasumarthi et al. "TF-Ranking: Scalable TensorFlow Library for Learning-to-Rank", KDD 2019.
4. Krichene & Rendle. "On Sampled Metrics for Item Recommendation", KDD 2020.
5. https://github.com/tensorflow/ranking

Written by DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
