🤖 AI Summary
Collaborative filtering suffers from recommendation bias toward tail items and inactive users due to the long-tailed distribution of user–item interactions. From a representation-learning perspective, this work first identifies two critical issues in the embedding space of recommender models: (i) group discrepancy, i.e., distributional shifts across user/item subgroups, and (ii) global collapse, i.e., undesirable concentration of embeddings. To address these, we propose AURL, a dual-regularization framework comprising group alignment (to mitigate inter-subgroup distribution shifts) and global uniformity (to prevent embedding collapse), explicitly optimizing embedding-distribution properties without compromising overall accuracy. Our method is plug-and-play, compatible with mainstream recommendation backbones, and trained end-to-end. Extensive experiments on three real-world datasets demonstrate significant improvements for tail-item and inactive-user recommendations, with an average 12.7% gain in Recall@20, while maintaining or slightly improving performance on head items.
📝 Abstract
Collaborative Filtering (CF) plays a crucial role in modern recommender systems, leveraging historical user-item interactions to provide personalized suggestions. However, CF-based methods often suffer from biases caused by imbalances in the training data: they tend to prioritize popular items and perform unsatisfactorily for inactive users. Existing works address this issue by rebalancing training samples, reranking recommendation results, or making the modeling process robust to the bias. Despite their effectiveness, these approaches can compromise accuracy or be sensitive to weighting strategies, making them challenging to train. In this paper, we analyze the causes and effects of these biases in depth and propose a framework that alleviates recommendation bias from the perspective of representation distribution, named Group-Alignment and Global-Uniformity Enhanced Representation Learning for Debiasing Recommendation (AURL). Specifically, we identify two significant problems in the representation distribution of users and items, namely group-discrepancy and global-collapse, which directly lead to biased recommendation results. To this end, we propose two simple but effective regularizers in the representation space, named group-alignment and global-uniformity, respectively. Group-alignment brings the representation distribution of long-tail entities closer to that of popular entities, while global-uniformity preserves as much entity information as possible by distributing representations evenly. Our method directly optimizes both regularization terms to mitigate recommendation biases. Extensive experiments on three real-world datasets with various recommendation backbones verify the superiority of the proposed framework.
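The abstract does not give the regularizers' exact form. As a minimal NumPy sketch of one plausible instantiation (not necessarily the paper's): group-alignment approximated by matching the centroids of popular and long-tail entity embeddings, and global-uniformity as the standard log-mean Gaussian-potential uniformity loss on L2-normalized embeddings. The function names and the centroid-matching choice are illustrative assumptions.

```python
import numpy as np

def group_alignment(pop_emb: np.ndarray, tail_emb: np.ndarray) -> float:
    """Squared distance between the two group centroids.

    A simple way to pull the long-tail group's embedding distribution
    toward the popular group's (centroid matching is an illustrative
    assumption; the paper's exact regularizer may differ).
    """
    return float(np.sum((pop_emb.mean(axis=0) - tail_emb.mean(axis=0)) ** 2))

def global_uniformity(emb: np.ndarray, t: float = 2.0) -> float:
    """Log-mean Gaussian potential over all embedding pairs.

    Embeddings are L2-normalized onto the unit hypersphere first;
    lower values mean the representations are spread more evenly,
    i.e., further from collapse.
    """
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    # Pairwise squared Euclidean distances between all rows.
    sq_dists = np.sum((emb[:, None, :] - emb[None, :, :]) ** 2, axis=-1)
    i, j = np.triu_indices(len(emb), k=1)  # each unordered pair once
    return float(np.log(np.mean(np.exp(-t * sq_dists[i, j]))))
```

In a training loop, these terms would be added to the backbone's recommendation loss with two trade-off weights, e.g. `loss = rec_loss + lam1 * group_alignment(...) + lam2 * global_uniformity(...)`, which matches the plug-and-play, end-to-end character claimed above.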