AI Summary
In recommender systems, sparse user-item interactions hinder contrastive learning from capturing sufficient semantic information. Method: This paper proposes InfoDCL, a novel diffusion-based graph contrastive learning framework. (1) It introduces a semantics-enhanced, single-step controllable noise injection mechanism guided by mutual information to generate more discriminative user preference views. (2) It formulates a joint optimization objective that unifies generative modeling and preference learning, enabling end-to-end co-training of representation learning and view generation. (3) During inference, it dynamically fuses multi-layer GCN representations to capture high-order co-occurrence patterns. Results: InfoDCL achieves significant improvements over state-of-the-art methods across five real-world benchmark datasets, demonstrating the effectiveness of semantics-aware diffusion noise in enhancing both recommendation accuracy and generalization.
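The inference-time fusion in point (3) can be illustrated with a minimal LightGCN-style sketch. This is not the paper's implementation; the function name, the uniform default weights, and the parameter-free propagation rule are illustrative assumptions.

```python
import numpy as np

def layer_fusion_inference(adj, emb0, num_layers=3, weights=None):
    """Fuse multi-layer graph-propagated embeddings at inference time.

    adj:    (n, n) normalized adjacency of the user-item interaction graph
    emb0:   (n, d) base embeddings learned during training
    weights: optional per-layer fusion weights (uniform if None; hypothetical)
    """
    if weights is None:
        weights = np.ones(num_layers + 1) / (num_layers + 1)
    embs = [emb0]
    for _ in range(num_layers):
        # Parameter-free propagation: each layer mixes in one more hop,
        # so deeper layers carry higher-order co-occurrence information.
        embs.append(adj @ embs[-1])
    # Weighted fusion of the layer-wise representations.
    return sum(w * e for w, e in zip(weights, embs))

# Toy demo: with an identity adjacency, every layer leaves emb0 unchanged,
# so the fused result equals emb0.
demo_adj = np.eye(4)
demo_emb = np.arange(8.0).reshape(4, 2)
fused = layer_fusion_inference(demo_adj, demo_emb, num_layers=2)
```

Because propagation happens only at inference, training cost stays that of the shallow model while the served embeddings still encode multi-hop structure.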
Abstract
Contrastive learning has demonstrated promising potential in recommender systems. Existing methods typically construct sparser views by randomly perturbing the original interaction graph, since the authentic user preferences are unknown. Owing to the sparse nature of recommendation data, this paradigm captures only limited semantic information. To address this issue, we propose InfoDCL, a novel diffusion-based contrastive learning framework for recommendation. Rather than injecting randomly sampled Gaussian noise, we employ a single-step diffusion process that integrates noise with auxiliary semantic information to generate guidance signals, which are fed into the standard diffusion process to produce authentic user preferences as contrastive views. Moreover, based on a comprehensive analysis of the mutual influence between generation and preference learning in InfoDCL, we design a collaborative training objective that turns the interference between the two into mutual collaboration. Additionally, we apply multiple GCN layers only during the inference stage to incorporate higher-order co-occurrence information while maintaining training efficiency. Extensive experiments on five real-world datasets demonstrate that InfoDCL significantly outperforms state-of-the-art methods, offering an effective solution for enhancing recommendation performance and suggesting a new paradigm for applying diffusion methods in contrastive learning frameworks.
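The core idea of the abstract — a single-step forward diffusion whose noise is blended with an auxiliary semantic signal, with the resulting view scored against the original by a contrastive objective — can be sketched as follows. This is a minimal illustration, not the paper's method: the blend weight `gamma`, the schedule coefficient `alpha_bar`, and both function names are assumptions, and a standard InfoNCE loss stands in for the paper's full objective.

```python
import numpy as np

def single_step_noise_injection(x0, semantics, alpha_bar=0.9, gamma=0.5, seed=0):
    """One closed-form forward diffusion step with semantics-blended noise.

    x0:        (n, d) clean embeddings
    semantics: (n, d) auxiliary semantic signal (e.g., side-information embeddings)
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(x0.shape)
    # Blend Gaussian noise with the semantic signal instead of using pure noise.
    guided_noise = gamma * eps + (1.0 - gamma) * semantics
    # Standard forward step: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*noise.
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * guided_noise

def info_nce(view_a, view_b, tau=0.2):
    """InfoNCE loss treating row i of each view as a positive pair."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                        # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(42)
x0 = rng.standard_normal((8, 16))     # toy embeddings
sem = rng.standard_normal((8, 16))    # toy semantic signal
view = single_step_noise_injection(x0, sem)
loss = info_nce(x0, view)
```

In the full framework this generated view would be refined by the reverse diffusion process and the contrastive term would be co-trained with the generative and preference objectives; the sketch only shows the noise-blending and view-contrasting steps.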