Intelligent Transportation Search Ranking: From Business Rules to Personalized Ranking Models
This article presents the challenges of travel-product search, explains why traditional rule-based sorting falls short, and describes how Alibaba Fliggy's (Flypig's) team built a deep-learning-based personalized ranking system, covering architecture, model variants, experimental results, and future optimization directions, to improve conversion rates for flight and ticket searches.
Background: Travel products such as flight, train, and bus tickets are highly standardized compared with physical e-commerce goods, so user decisions hinge on a few factors, chiefly departure time and price. Traditional sorting in the travel domain relies on static business rules (for example, cheapest-first), which cannot satisfy diverse, personalized travel needs.
Challenges: 1. User heterogeneity – sparse travel behavior, varying service expectations, and long decision cycles. 2. Information islands – unlike physical goods, travel items lack rich attribute graphs, making recall and relevance modeling difficult. 3. Real-time dynamics – inventory, pricing, and capacity change constantly, requiring adaptive ranking.
Solution Overview: The team moved from rule-based sorting to a data-driven personalized ranking pipeline. The offline pipeline collects logs, preprocesses data on Alibaba Cloud ODPS, trains models with TensorFlow, and deploys them to the TPP online serving environment.
Model Architecture:
Ranking system architecture diagram (data collection → preprocessing → model training → online serving).
Deep Listwise Model (DLM) – scores an entire candidate list at once (listwise scoring), yielding diverse results, approximating the user's decision process, and keeping online latency low.
Deep Choice Model (DCM) – uses sequence encoders (LSTM, bi-LSTM, or Transformer) with an attention-based decoder to mimic how users choose among a sequence of flights.
Personalized Flight Ranking Network (PFRN) – a dual‑tower model encoding flight sequences and user behavior sequences, with attention to capture user preferences and a novel Listwise Feature Encoding (LFE) component.
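The listwise idea shared by these models can be sketched in a few lines. The following is a minimal illustration, not the production network: a linear scorer stands in for the deep model, all feature values and names (`listwise_scores`, `listwise_loss`) are invented, and training is reduced to evaluating a softmax cross-entropy loss on the flight the user actually chose.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def listwise_scores(flight_feats, w):
    # One score per flight in the candidate list (a linear scorer stands in
    # for the deep network used in practice).
    return flight_feats @ w

def listwise_loss(flight_feats, chosen_idx, w):
    # Softmax over the whole list turns scores into a choice distribution;
    # cross-entropy against the booked flight is the listwise training loss.
    p = softmax(listwise_scores(flight_feats, w))
    return -np.log(p[chosen_idx])

# Toy list of 4 flights with 3 features each (e.g., price, duration,
# departure hour); all values are invented for illustration.
feats = np.array([[0.2, 0.5, 0.1],
                  [0.9, 0.3, 0.4],
                  [0.4, 0.8, 0.6],
                  [0.1, 0.2, 0.9]])
w = np.array([0.5, -0.3, 0.8])
ranking = np.argsort(-listwise_scores(feats, w))  # best flight first
```

Scoring the whole list jointly (rather than each flight in isolation) is what lets the model trade off diversity and relevance across positions.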
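The attention mechanism that DCM's decoder and PFRN's user tower rely on can be illustrated with scaled dot-product attention: the candidate flight acts as the query over the user's behavior sequence, producing a preference summary for that candidate. This is a generic sketch with invented names and toy embeddings, not the exact architecture from the talk.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(query, behavior_seq):
    # Scaled dot-product attention: how relevant is each past behavior
    # (e.g., a previously viewed flight) to the candidate flight?
    d = query.shape[-1]
    weights = softmax(behavior_seq @ query / np.sqrt(d))
    # The weighted sum summarizes the behavior sequence for this candidate.
    return weights, weights @ behavior_seq

# Candidate flight embedding and three past-behavior embeddings (toy values).
candidate = np.array([1.0, 0.0])
history = np.array([[0.9, 0.1],    # similar past flight -> high weight
                    [0.0, 1.0],
                    [-0.5, 0.2]])
weights, summary = attend(candidate, history)
```

In a dual-tower setup like PFRN's, a summary vector of this kind from the user tower is combined with the flight tower's encoding before the final score is produced.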
Handling Data Sparsity: Business rules partition users into six groups; group-level behavior is merged with each user's individual behavior to enrich sparse signals, significantly boosting ranking performance.
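One simple way to realize this group-plus-individual fusion is to blend a group-level preference vector with the user's own, trusting the individual signal more as the user accumulates behavior. The article does not give the exact fusion rule, so the function name, the smoothing constant `k`, and all vectors below are hypothetical.

```python
import numpy as np

def fused_preference(user_vec, group_vec, n_user_events, k=10.0):
    # Confidence in the individual signal grows with the number of observed
    # travel events; sparse (cold) users fall back to their group's behavior.
    alpha = n_user_events / (n_user_events + k)
    return alpha * np.asarray(user_vec) + (1 - alpha) * np.asarray(group_vec)

# A user with no history is represented purely by group-level preferences...
cold = fused_preference([0.0, 0.0], [0.2, 0.9], n_user_events=0)
# ...while an active user is dominated by personal behavior.
warm = fused_preference([0.8, 0.1], [0.2, 0.9], n_user_events=90)
```

The same idea generalizes to learned gating inside the network, where the blend weight is predicted from the user's activity level rather than fixed by a formula.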
Experimental Results: The evaluation compared three model families: a rule-based cheapest-first baseline, traditional machine-learning rankers, and recent research-grade deep models. Online A/B tests show the personalized models lift overall conversion rate by nearly 4%.
Conclusion & Future Work: The current system establishes a solid transportation ranking framework, incorporating group behavior to alleviate sparsity. Future directions include deeper travel intent understanding, advanced sparsity modeling, and expanding recommendation slots with richer multi-source content.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.