🤖 AI Summary
Popularity bias in collaborative filtering causes models to over-recommend popular items, degrading personalization and long-tail coverage. This work establishes, for the first time from a geometric perspective, that the bias arises from the coupling between user/item embedding norms and item frequency in Bayesian Personalized Ranking (BPR) optimization—inducing intrinsic distortion in the embedding space. To address this at its source, we propose the Directional Decomposition and Correction (DDC) framework: it employs asymmetric gradient updates to decouple user preference direction from the global popularity direction, enabling model-agnostic, geometry-aware debiasing. Theoretical analysis rigorously characterizes the geometric nature of the bias. Empirical evaluation demonstrates that DDC reduces training loss to under 5% of baseline levels across diverse BPR-based architectures, while substantially improving recommendation accuracy, long-tail coverage, and fairness.
📝 Abstract
Popularity bias fundamentally undermines the personalization capabilities of collaborative filtering (CF) models, causing them to disproportionately recommend popular items while neglecting users' genuine preferences for niche content. While existing approaches treat this as an external confounding factor, we reveal that popularity bias is an intrinsic geometric artifact of Bayesian Personalized Ranking (BPR) optimization in CF models. Through rigorous mathematical analysis, we prove that BPR systematically organizes item embeddings along a dominant "popularity direction" where embedding magnitudes directly correlate with interaction frequency. This geometric distortion forces user embeddings to simultaneously handle two conflicting tasks, expressing genuine preference and calibrating against global popularity, trapping them in suboptimal configurations that favor popular items regardless of individual tastes. We propose Directional Decomposition and Correction (DDC), a universally applicable framework that surgically corrects this embedding geometry through asymmetric directional updates. DDC guides positive interactions along personalized preference directions while steering negative interactions away from the global popularity direction, disentangling preference from popularity at the geometric source. Extensive experiments across multiple BPR-based architectures demonstrate that DDC significantly outperforms state-of-the-art debiasing methods, reducing training loss to less than 5% of heavily tuned baselines while achieving superior recommendation quality and fairness. Code is available at https://github.com/LingFeng-Liu-AI/DDC.
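The abstract does not spell out DDC's update equations, but the core geometric idea, decomposing an item embedding into a component along a global "popularity direction" plus an orthogonal residual, can be sketched as follows. All names, the weighted-mean estimator for the popularity direction, and the projection scheme here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items = 16, 100

# Illustrative item embeddings and per-item interaction counts (synthetic data).
item_emb = rng.normal(size=(n_items, d))
counts = rng.integers(1, 500, size=n_items)

# One plausible estimate of the global popularity direction: the
# interaction-weighted mean of item embeddings, normalized to unit length.
# (The paper's estimator may differ.)
p = (counts[:, None] * item_emb).sum(axis=0)
p /= np.linalg.norm(p)

def decompose(v: np.ndarray, p: np.ndarray):
    """Split v into its component along unit vector p and the orthogonal residual."""
    along = (v @ p) * p
    return along, v - along

# Sanity check: the two components reconstruct v, and the residual
# carries no popularity signal (it is orthogonal to p).
along, resid = decompose(item_emb[0], p)
assert np.allclose(along + resid, item_emb[0])
assert abs(resid @ p) < 1e-8
```

Under this decomposition, an asymmetric update in the spirit of the abstract would push positive-interaction gradients through the residual (preference) component while damping or reversing the component along `p`, so that popularity and preference are adjusted separately.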