Correcting for Popularity Bias in Recommender Systems via Item Loss Equalization

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Popularity bias in recommender systems leads to insufficient exposure of long-tail items, low satisfaction among niche-interest users, and exacerbated inter-group recommendation unfairness. To address this, we propose a dynamic loss-balancing method grounded in fair empirical risk minimization (FERM). First, items are grouped into fine-grained buckets based on popularity. Second, a weighted loss-balancing regularizer is introduced to constrain the variance of group-wise losses, enabling unbiased training. Third, an end-to-end differentiable optimization framework is constructed. This work is the first to systematically integrate FERM principles into recommendation systems. Experiments on two real-world datasets demonstrate substantial improvements: long-tail item exposure and niche-user satisfaction increase significantly, fairness metrics improve by 32%, while accuracy degradation remains below 0.5%. The approach thus achieves a favorable trade-off between group-level fairness and individual-level performance.

Technology Category

Application Category

📝 Abstract
Recommender Systems (RS) often suffer from popularity bias, where a small set of popular items dominate the recommendation results due to their high interaction rates, leaving many less popular items overlooked. This phenomenon disproportionately benefits users with mainstream tastes while neglecting those with niche interests, leading to unfairness among users and exacerbating disparities in recommendation quality across different user groups. In this paper, we propose an in-processing approach to address this issue by intervening in the training process of recommendation models. Drawing inspiration from fair empirical risk minimization in machine learning, we augment the objective function of the recommendation model with an additional term aimed at minimizing the disparity in loss values across different item groups during the training process. Our approach is evaluated through extensive experiments on two real-world datasets and compared against state-of-the-art baselines. The results demonstrate the superior efficacy of our method in mitigating the unfairness of popularity bias while incurring only negligible loss in recommendation accuracy.
Problem

Research questions and friction points this paper is trying to address.

Correcting popularity bias in recommender systems
Addressing unfairness in recommendation quality across user groups
Minimizing loss disparity among item groups during training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equalizing item loss to reduce bias
Augmenting objective function for fairness
In-processing approach for recommendation fairness
🔎 Similar Papers
No similar papers found.
J
Juno Prent
University of Amsterdam, Amsterdam, Netherlands
M
M. Mansoury
Delft University of Technology, Delft, Netherlands