🤖 AI Summary
Graph convolutional recommendation systems tend to amplify data bias associated with sensitive attributes during graph propagation, thereby degrading fairness. Existing approaches either neglect the impact of such bias on representation learning or employ coarse-grained data augmentation that compromises user preferences and recommendation utility. This paper proposes a fairness-aware dual data augmentation framework that jointly optimizes graph structure and node representations by identifying sensitive interactions and analyzing feature similarity—effectively mitigating bias propagation without perturbing the original preference distribution. The method integrates fairness-aware modeling, debiased learning, and joint structural-feature augmentation. Experiments on two real-world datasets demonstrate that our approach significantly improves both individual and group fairness (by 12.7%–28.3%) while preserving—and even slightly enhancing—recommendation accuracy (Recall@20 +0.9%), achieving a synergistic optimization of fairness and utility.
📝 Abstract
Graph Convolutional Networks (GCNs) have become increasingly popular in recommendation systems. However, recent studies have shown that GCN-based models cause sensitive information to propagate widely through the graph structure, amplifying data bias and raising fairness concerns. Although various fairness methods have been proposed, most neglect the impact of biased data on representation learning, which limits their fairness gains. Other studies construct fair and balanced data distributions through data augmentation, but these methods significantly reduce utility because they disrupt user preferences. In this paper, we design a fair recommendation method from the perspective of data augmentation that improves fairness while preserving recommendation utility. To achieve fairness-aware data augmentation with minimal disruption to user preferences, we propose two prior hypotheses. The first identifies sensitive interactions by comparing the outcomes of performance-oriented and fairness-aware recommendations; the second detects sensitive features by analyzing feature similarities between biased and debiased representations. Building on these hypotheses, we propose a dual data augmentation framework for fair recommendation, which applies two augmentation strategies to generate fair augmented graphs and feature representations. Furthermore, we introduce a debiasing learning method that minimizes the dependence between the learned representations and sensitive information to eliminate bias. Extensive experiments on two real-world datasets demonstrate the superiority of the proposed framework.
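The abstract does not specify how "dependence between the learned representations and sensitive information" is measured. One common instantiation of such a debiasing objective is the Hilbert–Schmidt Independence Criterion (HSIC), which is near zero when two sets of variables are statistically independent. The sketch below is a minimal NumPy illustration of that general idea, not the paper's actual implementation; the function names, the RBF kernel, and the median-heuristic bandwidth are all assumptions.

```python
import numpy as np

def rbf_kernel(X):
    """RBF kernel matrix with a median-heuristic bandwidth (an assumed
    choice; the paper may use a different kernel or measure)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    med = np.median(d2[d2 > 0])          # median pairwise squared distance
    return np.exp(-d2 / med)

def hsic(X, S):
    """Biased HSIC estimate between representations X (n x d) and
    sensitive attributes S (n x k). Larger values indicate stronger
    dependence; a debiasing loss would minimize this term."""
    n = X.shape[0]
    K = rbf_kernel(X)
    L = rbf_kernel(S)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In a training loop, a differentiable version of this estimator would be added to the recommendation loss as a regularizer, so that user/item representations carry as little information about the sensitive attribute as possible.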