Selected Papers from Kuaishou Community Science Track at SIGIR 2023
This article presents five peer‑reviewed papers accepted to the Kuaishou Community Science track at the 2023 ACM SIGIR conference, covering offline reinforcement learning for recommendation fairness, global residual value for fair item exposure, personalized QoE in short‑form video, multi‑behavior self‑supervised recommendation, and a search‑enhanced sequential recommendation framework.
The International ACM SIGIR conference is the premier venue for intelligent information retrieval research. Its 46th edition (July 23‑27, 2023, Taipei) received 822 long‑paper submissions (20.1% acceptance rate) and 613 short‑paper submissions (25.12% acceptance rate). Five papers from Kuaishou's Community Science track were selected for presentation.
Paper 01: Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation
Download: https://arxiv.org/abs/2307.04571
Code: https://github.com/chongminggao/DORL-codes
Authors: Gao Chongming, Huang Kexin, Chen Jiawei, Zhang Yuan, Li Biao, Jiang Peng, Wang Shiqi, Zhang Zhong, He Xiangnan
Abstract: Offline RL methods counter value overestimation with conservatism, and in recommendation this conservatism amplifies the Matthew effect: popular items come to dominate exposure. The authors propose DORL, a debiased model‑based offline RL method that adds a penalty term based on the entropy of the behavior policy to relax conservatism, thereby mitigating the Matthew effect while preserving user interest.
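The reward‑reshaping idea behind this line of work can be pictured as follows. This is a minimal sketch, not the paper's implementation; `penalized_reward`, its arguments, and the coefficient values are hypothetical names chosen for illustration:

```python
import math

def penalized_reward(predicted_reward, model_uncertainty, behavior_probs,
                     lambda_u=0.1, lambda_e=0.05):
    """Reshape the world-model reward for offline RL training:
    subtract an uncertainty penalty (the standard conservative term in
    model-based offline RL) and add a bonus proportional to the entropy
    of the logged behavior policy, which counteracts the conservatism
    that concentrates exposure on popular items."""
    entropy = -sum(p * math.log(p) for p in behavior_probs if p > 0)
    return predicted_reward - lambda_u * model_uncertainty + lambda_e * entropy
```

Under this reshaping, states logged by a near‑uniform (high‑entropy) behavior policy are penalized less than states logged by a sharply peaked one, which is the direction of the paper's de‑biasing argument.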
Paper 02: Measuring Item Global Residual Value for Fair Recommendation
Download: http://arxiv.org/abs/2307.08259
Authors: Wang Jiayin, Ma Weizhi, Jiang Chumeng, Zhang Min, Zhang Yuan, Li Biao, Jiang Peng
Abstract: The paper shifts focus from user‑side preference modeling to content‑side fairness. It defines a Global Residual Value (GRV) to capture the remaining utility of items over time, integrates GRV into a fairness‑aware recommendation framework (TaFR), and demonstrates improved exposure fairness and recommendation performance on multiple datasets.
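As a toy illustration of the content‑side idea (not TaFR's actual estimator), one could blend a user‑side relevance score with a crude residual‑value proxy. The function name, the linear‑decay lifetime model, and the blending weight below are all assumptions made for this sketch:

```python
def grv_score(base_relevance, item_age_hours, expected_lifetime_hours, alpha=0.3):
    """Blend a user-side relevance score with a crude residual-value
    proxy: the fraction of an item's expected engagement lifetime that
    remains. Items with remaining value get a ranking boost, shifting
    exposure toward content that can still accumulate utility."""
    residual = max(0.0, 1.0 - item_age_hours / expected_lifetime_hours)
    return (1 - alpha) * base_relevance + alpha * residual
```

The paper learns the residual value from data rather than assuming a decay curve; the point of the sketch is only how a content‑side signal can enter the final ranking score.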
Paper 03: Hydrus: Improving Personalized Quality of Experience in Short‑form Video Services
Download: https://dl.acm.org/doi/10.1145/3539618.3591696
Authors: Yuan Zhiyu, Ren Kai, Wang Gang, Miao Xin
Abstract: Traditional QoE optimization focuses on reducing server latency, but short‑video services must trade latency off against recommendation accuracy. Hydrus formulates resource allocation as a utility‑maximization problem and solves it in milliseconds, balancing latency and relevance to deliver higher click‑through rates and watch time without increasing system cost.
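A utility‑maximization allocation step can be approximated with a utility‑per‑millisecond greedy, a classic knapsack heuristic. This sketch uses hypothetical option names and does not reproduce Hydrus's actual solver:

```python
def allocate(options, budget_ms):
    """Pick processing options under a latency budget by greedy
    utility density (utility gained per millisecond spent). Each
    option is (name, cost_ms, utility); returns the chosen names and
    their total utility. A fast approximation, not an exact solver."""
    chosen, spent, total = [], 0.0, 0.0
    for name, cost, util in sorted(options, key=lambda o: o[2] / o[1], reverse=True):
        if spent + cost <= budget_ms:
            chosen.append(name)
            spent += cost
            total += util
    return chosen, total
```

For example, with options `[("rerank_A", 10, 5.0), ("rerank_B", 20, 8.0), ("rerank_C", 5, 1.0)]` and a 30 ms budget, the greedy spends the budget on the two highest‑density options.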
Paper 04: Multi‑behavior Self‑supervised Learning for Recommendation
Download: https://arxiv.org/abs/2305.18238
Authors: Xu Jingcao, Wang Chaokun, Wu Cheng, Song Yang, Zheng Kai, Wang Xiaowei, Wang Changping, Zhou Guorui
Abstract: The work addresses challenges in multi‑behavior recommendation, where sparse target‑behavior signals and noisy auxiliary signals hinder representation learning. It proposes MBSSL, a graph‑neural‑network‑based model that employs cross‑behavior and intra‑behavior self‑supervision, and introduces a gradient‑mixing adjustment to balance auxiliary and target tasks.
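The cross‑behavior self‑supervision can be illustrated with a generic InfoNCE‑style contrastive loss, which pulls a user's target‑behavior embedding toward its auxiliary‑behavior view and pushes it away from other users' views. This is a standard sketch, not MBSSL's exact objective:

```python
import math

def info_nce(anchor, positive, negatives, tau=0.2):
    """InfoNCE contrastive loss on plain Python lists: higher cosine
    similarity between anchor and positive (relative to negatives)
    yields a lower loss."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(sum(x * x for x in a)) or 1.0
    def sim(a, b): return dot(a, b) / (norm(a) * norm(b))
    pos = math.exp(sim(anchor, positive) / tau)
    neg = sum(math.exp(sim(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

In MBSSL this kind of objective runs across behavior types (and within a behavior on perturbed graph views), so the sparse target behavior borrows signal from abundant auxiliary behaviors.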
Paper 05: When Search Meets Recommendation: Learning Disentangled Search Representation for Recommendation
Download: https://arxiv.org/abs/2305.10822
Authors: Si Zihua, Sun Zhongxiang, Zhang Xiao, Xu Jun, Zang Xiaoxue, Song Yang, Wen Jirong
Abstract: To bridge search and recommendation, the authors propose SESRec, a framework that disentangles user interests into search‑similar and search‑dissimilar components via contrastive self‑supervision. Experiments on Kuaishou and Amazon datasets show state‑of‑the‑art performance improvements.
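The similar/dissimilar split can be pictured as a vector decomposition. SESRec learns its disentanglement with contrastive objectives; the sketch below instead uses a plain orthogonal projection, purely to make the two components concrete:

```python
def disentangle(search_vec, rec_vec):
    """Split a search-interest vector into the component aligned with
    the recommendation interest (search-similar) and the orthogonal
    residual (search-dissimilar), via projection onto rec_vec."""
    dot = sum(s * r for s, r in zip(search_vec, rec_vec))
    rr = sum(r * r for r in rec_vec) or 1.0
    similar = [dot / rr * r for r in rec_vec]
    dissimilar = [s - c for s, c in zip(search_vec, similar)]
    return similar, dissimilar
```

The two components sum back to the original vector, and the dissimilar part is orthogonal to the recommendation interest, which is the geometric intuition behind keeping both signals rather than discarding the mismatch.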
All five papers contribute novel algorithms and empirical insights to the fields of reinforcement learning, fairness, QoE optimization, multi‑behavior modeling, and search‑enhanced recommendation.
Kuaishou Tech
Official Kuaishou tech account, providing real-time updates on the latest Kuaishou technology practices.