
How Recommendation Models Amplify Popularity Bias: A Spectral Perspective and the ReSN Correction Method

The award‑winning WSDM 2025 paper reveals how recommendation models magnify popularity bias through spectral alignment of the rating matrix’s leading singular vector with item popularity, and proposes the ReSN regularization technique to mitigate this bias efficiently.

AntTech

The 18th International Conference on Web Search and Data Mining (WSDM 2025) awarded a Best Paper to a Chinese team for their work titled “How Do Recommendation Models Amplify Popularity Bias? An Analysis from the Spectral Perspective.” The paper investigates why recommendation systems tend to over‑recommend popular items.

The authors identify two key phenomena: a "popularity memory effect" where the leading singular vector of the rating matrix closely matches the item popularity vector, and a "popularity amplification effect" caused by dimensionality reduction in low‑rank embeddings, which magnifies bias.

Using singular value decomposition (SVD), they show that for an \(n \times m\) rating matrix \(Y\), the top right singular vector \(q_1\) closely aligns with the item popularity vector \(r\). When item popularity follows a power‑law distribution with exponent \(\alpha\), the cosine similarity between \(q_1\) and \(r\) approaches 1 as \(\alpha\) grows, confirming the empirical observation that \(q_1\) memorizes popularity.
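The memory effect is easy to reproduce on synthetic data. The sketch below (dataset sizes, interaction probability, and the power‑law exponent are illustrative assumptions, not the paper's setup) samples a binary rating matrix whose item popularity follows a power law, then checks how closely the top right singular vector tracks the popularity vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, alpha = 2000, 300, 1.5

# Item j is interacted with probability proportional to (j+1)^(-alpha)
pop = np.arange(1, n_items + 1) ** (-alpha)
Y = (rng.random((n_users, n_items)) < 0.3 * pop / pop.max()).astype(float)

# Top right singular vector q1 of Y
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
q1 = Vt[0] * np.sign(Vt[0].sum())      # resolve SVD sign ambiguity

r = Y.sum(axis=0)                      # empirical item popularity vector
cos = (q1 @ r) / (np.linalg.norm(q1) * np.linalg.norm(r))
```

With a power-law exponent this steep, the cosine similarity lands close to 1, mirroring the paper's claim.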

The amplification effect is traced to low‑rank approximation: user and item embedding dimensions are deliberately small, so during training the component tied to the largest singular value is learned first and dominates, while components tied to smaller singular values grow slowly. The model therefore leans heavily on the popularity‑related component when scoring items.
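One way to see the amplification numerically (again on synthetic data, with an illustrative rank of 16): truncating the SVD discards mass from the tail singular values, so the leading, popularity‑aligned component accounts for a larger share of the retained spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, rank = 2000, 300, 16

pop = np.arange(1, n_items + 1) ** -1.5
Y = (rng.random((n_users, n_items)) < 0.3 * pop / pop.max()).astype(float)

s = np.linalg.svd(Y, compute_uv=False)   # singular values, descending
full_share = s[0] / s.sum()              # sigma_1's weight in the full spectrum
trunc_share = s[0] / s[:rank].sum()      # sigma_1's weight after rank-16 truncation
```

Since truncation only removes positive tail terms from the denominator, `trunc_share` always exceeds `full_share`: the low‑rank model's predictions are relatively more dominated by the popularity component than the raw data are.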

To counteract the bias, the authors propose ReSN (Regulating with Spectral Norm), a regularization term that penalizes the spectral norm (largest singular value) of the rating matrix. The modified loss is \(L = L_{rec} + \lambda \|Y\|_{\sigma}\), where \(L_{rec}\) is the original recommendation loss and \(\lambda\) controls regularization strength.
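A direct, deliberately naive rendering of this objective (assuming a matrix‑factorization model with factors `U` and `V`, and squared error standing in for \(L_{rec}\); both choices are mine, not the paper's exact setup):

```python
import numpy as np

def resn_loss(U, V, Y, lam=0.1):
    """ReSN-style objective: recommendation loss plus a penalty on the
    spectral norm (largest singular value) of the predicted score matrix."""
    Y_hat = U @ V.T
    rec = np.mean((Y - Y_hat) ** 2)                     # stand-in for L_rec
    sigma1 = np.linalg.svd(Y_hat, compute_uv=False)[0]  # ||Y_hat||_sigma
    return rec + lam * sigma1
```

Running a full SVD on an n×m matrix every training step is far too expensive for real catalogs, which is exactly the problem the paper's two computational tricks address.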

Because directly computing the spectral norm of a large matrix is costly, the paper introduces two tricks: (1) approximate the spectral norm using the popularity vector in place of the top singular vector, exploiting the alignment established earlier, so no SVD is needed; (2) enforce the norm on the low‑rank factorization \(Y = UV^T\) instead of the full matrix, drastically reducing computation.
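Trick (2) can be sketched as power iteration on the factored form: multiplying by `U` and `V.T` separately keeps every step at O((n+m)d) cost, and the full n×m matrix is never materialized. The function name and iteration count below are my own; the paper may implement this differently.

```python
import numpy as np

def spectral_norm_lowrank(U, V, n_iter=30):
    """Estimate ||U V^T||_sigma by power iteration without forming U V^T."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(V.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = U @ (V.T @ v)          # (U V^T) v
        u /= np.linalg.norm(u)
        v = V @ (U.T @ u)          # (U V^T)^T u
        v /= np.linalg.norm(v)
    return np.linalg.norm(U @ (V.T @ v))

# Sanity check against a direct SVD on the densified matrix
rng = np.random.default_rng(42)
U = rng.standard_normal((500, 16))
V = rng.standard_normal((200, 16))
est = spectral_norm_lowrank(U, V)
exact = np.linalg.svd(U @ V.T, compute_uv=False)[0]
```

The power-iteration estimate converges to the largest singular value from below, so it matches the exact value to high precision after a few dozen iterations.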

Extensive experiments on seven real‑world datasets demonstrate that ReSN consistently outperforms baseline debiasing methods in both accuracy and fairness. Pareto curves show that ReSN achieves higher recommendation quality at the same level of fairness, and vice versa.

In summary, the study uncovers the spectral roots of popularity bias in recommender systems and offers an efficient, theoretically‑grounded regularization technique that mitigates the bias without sacrificing performance.

machine learning · recommendation systems · popularity bias · fairness · regularization · spectral analysis
Written by

AntTech

Technology is the core driver of Ant's future creation.
