Algorithmic Approaches for Hotel Category Planning, Group Recommendation, and Large‑Promotion Selection in Fliggy Travel
This article presents Fliggy Travel's end‑to‑end algorithmic solutions for hotel category planning, introduces the LINet group‑recommendation model that incorporates location and travel intent, and details the PETS two‑stage model for selecting hot‑sale hotels under recall constraints, together with experimental results and practical insights.
The presentation opens with the business background of hotel category planning at Fliggy Travel, describing the two‑stage operational flow (supply side and consumption side) and identifying the first‑stage bottleneck: generating high‑value hotel lists for business development.
Three typical category‑planning schemes are discussed: (1) time‑series forecasting using models such as Prophet, MQ‑RNN, DeepAR, and TFT, which ignores user interaction and location information; (2) personalized recommendation based on user‑item behavior sequences, limited by the lack of user requests at the modeling stage; and (3) group recommendation (e.g., AGREE, SIGR, GroupSA) that captures group preferences but overlooks hotel attributes, travel time, location, and intent.
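To make scheme (1)'s limitation concrete, here is a minimal, hypothetical sketch of a history-only forecaster (a seasonal-naive baseline, far simpler than Prophet or DeepAR but sharing the same blind spot): it consumes only the demand series itself, with no access to user interactions or location signals. All data and names below are illustrative.

```python
# Illustrative only: a seasonal-naive forecast of daily hotel bookings.
# Like the time-series schemes discussed above, it sees only the demand
# history itself -- no user-interaction or location information.
def seasonal_naive_forecast(history, season_length, horizon):
    """Repeat the last full season of `history` to cover `horizon` steps."""
    if len(history) < season_length:
        raise ValueError("need at least one full season of history")
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]

# Hypothetical 4 weeks of daily bookings with a weekend peak.
daily_bookings = [120, 95, 90, 100, 110, 180, 210] * 4
forecast = seasonal_naive_forecast(daily_bookings, season_length=7, horizon=10)
```

Any model of this shape, however sophisticated, cannot distinguish *who* is booking or *why*, which is the gap the group-recommendation view tries to close.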
The LINet model is then introduced as a novel group‑recommendation framework that was accepted at WWW2023. The problem is defined as a "when‑where‑why" (WWW) task: recommending hotels to user groups sharing similar travel intent within a specific time and region. Key challenges include accurate group segmentation, leveraging the three WWW dimensions, and handling data sparsity.
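The "when-where-why" framing can be illustrated with a toy segmentation sketch (not Fliggy's implementation; the record fields and intent labels are hypothetical): users are bucketed by travel month (when), destination region (where), and inferred travel intent (why), and each bucket becomes a candidate group for recommendation.

```python
from collections import defaultdict

# Hypothetical sketch of when-where-why group segmentation.
# Field names ("month", "region", "intent") are illustrative only.
def segment_groups(user_records):
    """Bucket users by (travel month, destination region, travel intent)."""
    groups = defaultdict(list)
    for rec in user_records:
        key = (rec["month"], rec["region"], rec["intent"])
        groups[key].append(rec["user_id"])
    return dict(groups)

records = [
    {"user_id": "u1", "month": 7, "region": "Sanya", "intent": "beach"},
    {"user_id": "u2", "month": 7, "region": "Sanya", "intent": "beach"},
    {"user_id": "u3", "month": 12, "region": "Harbin", "intent": "ski"},
]
groups = segment_groups(records)
```

In practice the intent label would itself come from a learned recognition step, and data sparsity within small (month, region, intent) buckets is one of the challenges LINet is designed to handle.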
LINet’s architecture consists of six sub‑structures: (1) travel‑intent recognition and group segmentation; (2) internal local preference representation using attention‑based aggregation of group members and a DIN‑style short‑term preference module; (3) internal global preference representation built on a weighted heterogeneous graph of long‑term group‑hotel interactions; (4) external location‑time representation that models seasonal cycles with a PRM module and a monthly POI memory matrix; (5) a neural collaborative filtering (NCF) layer for learning group preferences; and (6) a loss that combines cross‑entropy, contrastive loss, and a reward‑augmented maximum likelihood (RAML) objective for the truncation stage. Offline, the model outperforms pure time‑series methods on hit rate and precision, and online A/B tests show a 3% increase in room‑night revenue.
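Sub-structure (2)'s attention-based aggregation can be sketched as follows. This is a minimal dependency-free illustration of the general technique (softmax attention pooling over member embeddings), not Fliggy's code: each member vector is scored against a learned query, and the group representation is the resulting weighted sum.

```python
import math

# Minimal sketch of attention-based aggregation of group-member
# embeddings (the general technique behind LINet's local-preference
# module, not its actual implementation).
def attention_pool(member_vecs, query):
    """Weight each member embedding by softmax(query . member), then sum."""
    scores = [sum(q * m for q, m in zip(query, vec)) for vec in member_vecs]
    max_s = max(scores)                      # stabilize the softmax
    exps = [math.exp(s - max_s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(member_vecs[0])
    return [sum(w * vec[d] for w, vec in zip(weights, member_vecs))
            for d in range(dim)]

# Two members with one-hot embeddings; the query favors the first.
members = [[1.0, 0.0], [0.0, 1.0]]
group_vec = attention_pool(members, query=[2.0, 0.0])
```

Because the toy member embeddings are one-hot, the pooled vector's components equal the attention weights directly, which makes the weighting easy to inspect.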
The PETS model addresses the large‑promotion hotel selection problem under recall constraints. It formulates a two‑stage pipeline: a scoring stage (binary classification of potential hot‑sale hotels) and a truncation stage (using a Transformer to predict the cut‑off position k that maximizes F1 while satisfying the recall constraint). The scoring stage incorporates three signals—historical hot‑sale occurrence, recent hot‑sale trends via a GRU predictor, and similarity to other hotels via item‑level attention and category‑level generalization—to mitigate cold‑start and data‑sparsity issues. The truncation stage refines the score tokens with additional features (hotel category, business‑district statistics, bias calibration) and predicts k via a softmax over the Transformer outputs. The loss combines cross‑entropy, a contrastive loss, and a RAML‑based F1 distribution loss.
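The objective the truncation stage learns can be made concrete with a brute-force version (illustrative only; PETS predicts k with a Transformer rather than enumerating, and the labels below are hypothetical): given candidates sorted by descending hot-sale score, choose the cut-off that maximizes F1 among cut-offs meeting the recall floor.

```python
# Illustrative brute-force counterpart of PETS's truncation objective:
# pick the cut-off k maximizing F1 subject to a minimum-recall constraint.
def best_cutoff(labels_sorted_by_score, min_recall):
    """labels_sorted_by_score: 0/1 hot-sale labels, score-descending order."""
    total_pos = sum(labels_sorted_by_score)
    best_k, best_f1 = None, -1.0
    tp = 0
    for k, label in enumerate(labels_sorted_by_score, start=1):
        tp += label
        recall = tp / total_pos
        if recall < min_recall:          # recall constraint not yet met
            continue
        precision = tp / k
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_k, best_f1 = k, f1
    return best_k, best_f1

labels = [1, 1, 0, 1, 0, 0, 0, 1]  # hypothetical ground truth, score-sorted
k, f1 = best_cutoff(labels, min_recall=0.75)
```

Here the best cut-off keeps the top 4 candidates (3 of 4 positives recovered, precision 0.75); pushing k further only trades precision for recall the constraint does not require.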
Experimental results show that PETS improves hit rate and precision compared with baseline time‑series methods, and the two‑stage design effectively balances recall and precision in large‑scale promotion scenarios. The Q&A section confirms that the framework can be adapted to other domains such as restaurant menu selection, where similar cold‑start and candidate‑set constraints exist.
DataFunSummit
Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.