🤖 AI Summary
Existing contrastive learning methods for graph collaborative filtering often define “noise” ambiguously, which risks removing critical user-item interactions and leads to unreliable views and cumbersome augmentation procedures. To address this, we propose SCAR, a lightweight, behavior-driven augmentation method that leverages user-item collaborative signals to generate high-quality pseudo-edges via controllable interaction addition or replacement, preserving essential information while greatly simplifying augmentation and improving robustness. SCAR integrates seamlessly into contrastive learning frameworks and enables efficient self-supervised training with graph neural networks (GNNs). Extensive experiments on four benchmark datasets show that SCAR consistently outperforms state-of-the-art contrastive learning and other advanced self-supervised recommendation methods, especially under data sparsity, and is notably robust to hyperparameter variation.
📝 Abstract
Contrastive learning (CL) has been widely used for enhancing the performance of graph collaborative filtering (GCF) for personalized recommendation. Since data augmentation plays a crucial role in the success of CL, previous works have designed augmentation methods to remove noisy interactions between users and items in order to generate effective augmented views. However, the ambiguity in defining “noisiness” presents a persistent risk of losing core information and generating unreliable data views, while increasing the overall complexity of augmentation. In this paper, we propose Simple Collaborative Augmentation for Recommendation (SCAR), a novel and intuitive augmentation method designed to maximize the effectiveness of CL for GCF. Instead of removing information, SCAR leverages collaborative signals extracted from user-item interactions to generate pseudo-interactions, which are then either added to or used to replace existing interactions. This results in more robust representations while avoiding the pitfalls of overly complex augmentation modules. We conduct experiments on four benchmark datasets and show that SCAR outperforms previous CL-based GCF methods as well as other state-of-the-art self-supervised learning approaches across key evaluation metrics. SCAR exhibits strong robustness across different hyperparameter settings and is particularly effective in sparse data scenarios.
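The core idea in the abstract, scoring unseen items by collaborative signals and then adding pseudo-interactions or using them to replace existing ones, can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the function name, the item-item co-occurrence scoring rule, and the deterministic replacement policy are all assumptions made for clarity.

```python
from collections import defaultdict

def scar_augment(interactions, k=1, mode="add"):
    # interactions: dict mapping user id -> set of item ids (bipartite graph).
    # Illustrative sketch only: scoring and replacement rules are assumed,
    # not taken from the SCAR paper.

    # Index item -> users, so we can measure item-item co-occurrence.
    item_users = defaultdict(set)
    for u, items in interactions.items():
        for i in items:
            item_users[i].add(u)

    augmented = {}
    for u, items in interactions.items():
        # Collaborative signal: score each unseen item j by how many users
        # it shares with each item in u's interaction history.
        scores = {
            j: sum(len(item_users[i] & item_users[j]) for i in items)
            for j in item_users if j not in items
        }
        # Top-k pseudo-interactions with positive support (ties broken by id).
        pseudo = [j for j in sorted(scores, key=lambda j: (-scores[j], j))[:k]
                  if scores[j] > 0]
        if mode == "replace":
            # Swap pseudo-edges in for the same number of existing edges
            # (here deterministically dropping the lowest item ids).
            kept = sorted(items)[len(pseudo):]
            augmented[u] = set(kept) | set(pseudo)
        else:
            augmented[u] = set(items) | set(pseudo)
    return augmented
```

In a CL pipeline, the original graph and the augmented graph returned here would serve as the two views fed to a GNN encoder, with interactions kept (and only extended or swapped) rather than dropped as "noise".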