Search Relevance System Architecture and Practices in QQ Browser
This article shares the QQ Browser search relevance team's experience following the integration of the QQ Browser and Sogou search systems, covering the business overview, the evolution of the relevance system, the algorithm architecture, evaluation metrics, deep semantic matching, relevance calibration, and the model distillation techniques used to improve relevance performance.
The article introduces the concept of search relevance, emphasizing its role as a core task in information retrieval and a key metric for commercial search engines. It outlines the QQ Browser search business, which serves billions of queries daily and handles diverse result types including web pages, images, mini‑programs, and video cards.
It then describes the fusion of the QQ Browser search system with the Sogou search system completed in 2021, resulting in a two‑layer architecture: a general vertical search subsystem and a main search subsystem, both feeding into a top‑level fusion layer that performs heterogeneous ranking.
The algorithmic architecture is broken down into three layers: the recall layer (textual and vector recall), the coarse‑ranking layer (lightweight relevance, semantic, static, and statistical features), and the fine‑ranking layer (multi‑objective scoring including relevance, freshness, quality, and click‑through prediction).
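The coarse-then-fine structure can be sketched as a two-stage cascade: a cheap scorer prunes the recall set, and an expensive multi-objective scorer reranks the survivors. The function and scorer names below are illustrative stand-ins, not the team's actual interfaces, and the stage sizes are placeholders.

```python
from typing import Callable, List

def rank_cascade(
    query: str,
    candidates: List[str],
    coarse_score: Callable[[str, str], float],
    fine_score: Callable[[str, str], float],
    coarse_k: int = 100,
) -> List[str]:
    # Coarse-ranking stage: lightweight features prune the recall set to coarse_k docs.
    coarse = sorted(candidates, key=lambda d: coarse_score(query, d), reverse=True)[:coarse_k]
    # Fine-ranking stage: the costlier multi-objective model reranks only the survivors.
    return sorted(coarse, key=lambda d: fine_score(query, d), reverse=True)
```

The point of the cascade is cost control: the fine-ranking model's per-document cost is paid only for the `coarse_k` documents that survive pruning.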
Evaluation combines offline metrics such as Positive‑Negative Ratio (PNR) and Discounted Cumulative Gain (DCG) with online methods like interleaving experiments and expert side‑by‑side assessments.
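The two offline metrics are easy to state concretely. PNR compares every pair of results with different labels under the same query and takes the ratio of concordantly ordered pairs to discordantly ordered pairs; DCG sums graded relevance with a logarithmic position discount. The sketch below follows the standard definitions; function names are mine, not the article's.

```python
from itertools import combinations
from math import log2

def pnr(labels, scores):
    """Positive-Negative Ratio: concordant / discordant pairs within one query."""
    pos = neg = 0
    for (l1, s1), (l2, s2) in combinations(zip(labels, scores), 2):
        if l1 == l2:
            continue  # equal labels carry no ordering information
        if (l1 - l2) * (s1 - s2) > 0:
            pos += 1
        elif (l1 - l2) * (s1 - s2) < 0:
            neg += 1
    return pos / neg if neg else float("inf")

def dcg(relevances, k=None):
    """Discounted Cumulative Gain over the top-k ranked results."""
    rels = relevances[:k] if k is not None else relevances
    return sum(r / log2(i + 2) for i, r in enumerate(rels))
```

A PNR of 1.0 means the model orders labeled pairs no better than chance; higher is better, and an infinite PNR means no pair was mis-ordered.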
To address the challenges of heterogeneous data and long‑tail queries, the team introduced relevance calibration to produce globally comparable scores, and a hybrid matrix‑matching model that combines implicit BERT‑based similarity matrices with explicit term‑level matching matrices using CNN aggregation.
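The hybrid matrix-matching idea can be illustrated by building the two input channels: an explicit term-level exact-match matrix and an implicit token-embedding cosine-similarity matrix, stacked for a downstream CNN aggregator to pool into a score. This is a minimal sketch under my own naming and with mean-style toy inputs; the article's model derives the implicit matrix from BERT representations, which are elided here.

```python
import numpy as np

def match_channels(q_terms, d_terms, q_emb, d_emb):
    """Stack an explicit exact-match matrix and an implicit cosine matrix."""
    # Explicit channel: 1.0 where a query term literally matches a doc term.
    exact = np.array([[float(qt == dt) for dt in d_terms] for qt in q_terms])
    # Implicit channel: cosine similarity between token embeddings
    # (in the article these would come from a BERT encoder).
    qn = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    dn = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    implicit = qn @ dn.T
    # A 2-channel "image" of shape (2, |q|, |d|) for a CNN aggregator.
    return np.stack([exact, implicit])
```

Keeping the explicit channel alongside the learned one is what preserves term-level precision on long-tail queries, where purely implicit similarity tends to over-generalize.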
The article also discusses deep semantic matching techniques (representation‑based dual‑tower and interaction‑based BERT models) and the adoption of relevance‑matching concepts to improve keyword precision.
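The operational difference between the two matching families is where the heavy computation happens. In a representation-based dual-tower model, each side is encoded independently, so document vectors can be precomputed offline and serving reduces to a dot product; an interaction-based BERT model must encode the query-document pair jointly at request time. A minimal dual-tower sketch, with a mean-pooling stand-in for the real encoder (my simplification, not the article's architecture):

```python
import numpy as np

def encode(token_embs):
    """Stand-in tower: mean-pool token embeddings and L2-normalize."""
    v = np.asarray(token_embs, dtype=float).mean(axis=0)
    return v / np.linalg.norm(v)

def dual_tower_score(query_vec, doc_vec):
    # Doc vectors are precomputable offline; per-request cost is one dot product.
    return float(query_vec @ doc_vec)
```

This cost profile is why dual-tower models suit the recall and coarse-ranking layers, while interaction models are reserved for fine ranking over a small candidate set.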
Model compression is tackled via multi‑step knowledge distillation, introducing Teacher‑Assistant (TA) models to bridge the gap between large teachers and small students, resulting in 5% offline and 1‑2% online improvements in key metrics.
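At each step of the chain (teacher to TA, then TA to student), the smaller model is trained against the larger one's temperature-softened output distribution. The sketch below shows the standard soft-label KL objective with temperature scaling; the loss form and temperature value are the conventional distillation recipe, not details confirmed by the article.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax (numerically stabilized)."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    """Soft-label KL divergence used at each teacher->TA->student step."""
    p = softmax(teacher_logits, T)  # softened teacher distribution
    q = softmax(student_logits, T)  # softened student distribution
    # T^2 rescales gradients so the soft loss's magnitude is comparable
    # across temperatures (as in standard Hinton-style distillation).
    return float(T * T * np.sum(p * np.log(p / q)))
```

The TA model's role is simply to keep the capacity gap at each KL-matching step small enough that the softened targets remain learnable by the next, smaller model.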
Finally, the authors summarize the system’s evolution, emphasizing the importance of explainable features, unified relevance standards, and ongoing investment in AI research to continuously enhance search relevance.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.