🤖 AI Summary
To address the fairness degradation and diversity loss that popularity bias causes in graph neural network (GNN)-based recommender systems, this paper proposes PBiLoss, a novel loss function that, for the first time, incorporates fairness objectives directly into end-to-end optimization. Methodologically, we design a dual-path popularity-aware sampling strategy (PopPos/PopNeg) and integrate both threshold-based and threshold-free popularity identification mechanisms, yielding a model-agnostic, plug-and-play debiasing framework. Coupled with lightweight GNNs such as LightGCN, PBiLoss dynamically adjusts the contributions of positive and negative samples based on item popularity. Extensive experiments on multiple real-world datasets show that our approach significantly reduces the quantitative fairness metrics PRU and PRI while preserving or even improving accuracy and ranking performance, as measured by Recall@K and NDCG. PBiLoss thus achieves a balanced trade-off among fairness, diversity, and recommendation effectiveness.
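To make the idea concrete, here is a minimal sketch of what a popularity-penalized pairwise loss could look like. This is an illustrative reconstruction, not the paper's exact formulation: the function name `pbi_bpr_loss`, the hyperparameter `alpha`, and the specific penalty term are all assumptions layered on top of a standard BPR objective.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pbi_bpr_loss(pos_score, neg_score, pos_popularity, alpha=0.1):
    """Illustrative PBiLoss-style objective (not the paper's exact form):
    a standard BPR pairwise loss plus a penalty that grows with the
    popularity of the positive item.

    pos_score / neg_score: model scores for the positive / negative item.
    pos_popularity: normalized popularity of the positive item in [0, 1].
    alpha: strength of the popularity penalty (assumed hyperparameter).
    """
    # Standard BPR term: push the positive item's score above the negative's.
    bpr = -math.log(sigmoid(pos_score - neg_score) + 1e-10)
    # Popularity penalty: discourage high scores on popular positives,
    # in the spirit of the PopPos strategy described above.
    penalty = alpha * pos_popularity * sigmoid(pos_score)
    return bpr + penalty
```

With equal scores, a sample whose positive item is popular incurs a strictly larger loss than one whose positive item is unpopular, which is the debiasing pressure the summary describes.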
📝 Abstract
Recommender systems, especially those based on graph neural networks (GNNs), have achieved remarkable success in capturing user-item interaction patterns. However, they remain susceptible to popularity bias, the tendency to over-recommend popular items, which reduces content diversity and compromises fairness. In this paper, we propose PBiLoss, a novel regularization-based loss function explicitly designed to counteract popularity bias in graph-based recommender models. PBiLoss augments traditional training objectives by penalizing the model's inclination toward popular items, thereby encouraging the recommendation of less popular but potentially more personalized content. We introduce two sampling strategies, Popular Positive (PopPos) and Popular Negative (PopNeg), which modulate the contributions of popular items used as positive and negative samples, respectively, during training. We further explore two methods for distinguishing popular items: one based on a fixed popularity threshold and one that requires no threshold, making the approach flexible and adaptive. Our proposed method is model-agnostic and can be seamlessly integrated into state-of-the-art graph-based frameworks such as LightGCN and its variants. Comprehensive experiments across multiple real-world datasets demonstrate that PBiLoss significantly improves fairness, as evidenced by reductions in the Popularity-Rank Correlation for Users (PRU) and Popularity-Rank Correlation for Items (PRI), while maintaining or even enhancing standard recommendation accuracy and ranking metrics. These results highlight the effectiveness of directly embedding fairness objectives into the optimization process, offering a practical and scalable solution for balancing accuracy and equitable content exposure in modern recommender systems.
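The two ways of distinguishing popular items mentioned in the abstract can be sketched as follows. Both functions, their names, and the 0.8 quantile cutoff are hypothetical illustrations of the general idea, not the paper's definitions.

```python
def popular_items_threshold(interaction_counts, quantile=0.8):
    """Threshold-based variant (illustrative): items whose interaction
    count exceeds a fixed popularity quantile are labeled 'popular'.
    The 0.8 quantile is an assumed example value, not the paper's choice."""
    counts = sorted(interaction_counts.values())
    cutoff = counts[int(quantile * (len(counts) - 1))]
    return {item for item, c in interaction_counts.items() if c > cutoff}

def popularity_weight(interaction_counts):
    """Threshold-free variant (illustrative): instead of a hard
    popular/unpopular split, each item gets a soft popularity weight
    in [0, 1] proportional to its interaction count."""
    max_count = max(interaction_counts.values())
    return {item: c / max_count for item, c in interaction_counts.items()}
```

The threshold-free form is what lets a popularity-aware loss scale its penalty smoothly with item popularity rather than toggling it on and off at an arbitrary cutoff.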