InfoDCL: Informative Noise Enhanced Diffusion Based Contrastive Learning

📅 2025-12-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In recommender systems, sparse user-item interactions hinder contrastive learning from capturing sufficient semantic information. Method: This paper proposes InfoDCL, a novel diffusion-based graph contrastive learning framework. (1) It introduces a semantics-enhanced, single-step controllable noise injection mechanism guided by mutual information to generate more discriminative user-preference views. (2) It formulates a joint optimization objective that unifies generative modeling and preference learning, enabling end-to-end co-training of representation learning and view generation. (3) During inference, it dynamically fuses multi-layer GCN representations to capture high-order co-occurrence patterns. Results: InfoDCL achieves significant improvements over state-of-the-art methods across five real-world benchmark datasets, demonstrating the effectiveness of semantics-aware diffusion noise in enhancing both recommendation accuracy and generalization capability.
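The "semantics-enhanced, single-step controllable noise injection" described above could be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function name `informative_noise_step`, the blend weight `gamma`, and the use of a convex mix between Gaussian noise and a semantic signal are all assumptions layered on a standard DDPM-style single-step corruption.

```python
import numpy as np

def informative_noise_step(x0, semantic, alpha_bar=0.9, gamma=0.5, rng=None):
    """Single-step forward diffusion with semantics-blended noise.

    Hypothetical sketch: instead of injecting pure Gaussian noise,
    blend in an auxiliary semantic signal so the corrupted view
    remains informative about user preferences.
    """
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal(x0.shape)
    # Informative noise: convex mix of Gaussian noise and the semantic signal.
    noise = (1.0 - gamma) * eps + gamma * semantic
    # Standard DDPM-style single-step corruption, using the blended noise.
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
```

Setting `gamma=0` recovers ordinary Gaussian corruption; larger `gamma` makes the view more semantics-driven, which is one plausible reading of "controllable" noise injection.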

πŸ“ Abstract
Contrastive learning has demonstrated promising potential in recommender systems. Existing methods typically construct sparser views by randomly perturbing the original interaction graph, since the authentic user preferences are unknown. Owing to the sparse nature of recommendation data, this paradigm captures only insufficient semantic information. To address this issue, we propose InfoDCL, a novel diffusion-based contrastive learning framework for recommendation. Rather than injecting randomly sampled Gaussian noise, we employ a single-step diffusion process that integrates noise with auxiliary semantic information to generate signals, which are fed to the standard diffusion process to generate authentic user preferences as contrastive views. Moreover, based on a comprehensive analysis of the mutual influence between generation and preference learning in InfoDCL, we build a collaborative training objective that transforms the interference between them into mutual collaboration. Additionally, we employ multiple GCN layers only during the inference stage to incorporate higher-order co-occurrence information while maintaining training efficiency. Extensive experiments on five real-world datasets demonstrate that InfoDCL significantly outperforms state-of-the-art methods. InfoDCL offers an effective solution for enhancing recommendation performance and suggests a novel paradigm for applying diffusion methods in contrastive learning frameworks.
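The "collaborative training objective" that couples generation and preference learning could, under one common recipe, be a weighted sum of a BPR preference loss and a diffusion reconstruction term. The sketch below is an assumption, not the paper's actual loss: the name `joint_objective`, the MSE reconstruction branch, and the weight `lam` are all illustrative stand-ins.

```python
import numpy as np

def joint_objective(u, i_pos, i_neg, x0, x0_hat, lam=0.1):
    """Hypothetical joint loss: BPR preference loss plus a diffusion
    reconstruction term, combined so that generation and preference
    learning are co-trained rather than interfering with each other."""
    # BPR: the positive item should score higher than the negative item.
    score = np.sum(u * (i_pos - i_neg), axis=-1)
    bpr = -np.mean(np.log(1.0 / (1.0 + np.exp(-score))))
    # Diffusion branch: simple MSE between clean and denoised embeddings.
    recon = np.mean((x0 - x0_hat) ** 2)
    return bpr + lam * recon
```

Because both terms share the same embeddings, gradients from the reconstruction term shape the views that the preference term contrasts, which is one plausible mechanism for the "mutual collaboration" the abstract describes.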
Problem

Research questions and friction points this paper is trying to address.

Enhances contrastive learning with informative noise diffusion
Generates authentic user preferences as contrastive views
Improves recommendation performance via collaborative training strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion process with informative noise for contrastive views
Implements collaborative training objective for mutual enhancement
Applies GCN layers only during inference for efficiency
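The inference-only multi-layer GCN fusion in the last bullet can be sketched in a LightGCN-like style: propagate base embeddings through the normalized adjacency for a few rounds and average the per-layer outputs. The function name, layer count, and uniform weighting below are assumptions for illustration, not the paper's reported design.

```python
import numpy as np

def fuse_gcn_layers(adj_norm, emb0, num_layers=3, weights=None):
    """Inference-only multi-layer propagation (LightGCN-style sketch):
    push embeddings through the normalized adjacency matrix and fuse
    the per-layer outputs to capture higher-order co-occurrence."""
    layers = [emb0]
    e = emb0
    for _ in range(num_layers):
        e = adj_norm @ e  # one round of neighborhood smoothing
        layers.append(e)
    if weights is None:
        # Uniform fusion; the paper describes a dynamic fusion instead.
        weights = np.full(len(layers), 1.0 / len(layers))
    return sum(w * layer for w, layer in zip(weights, layers))
```

Keeping this propagation out of the training loop is what preserves training efficiency: only the 0-layer embeddings are learned, and the higher-order smoothing is paid for once at inference time.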
Xufeng Liang
Beijing Institute of Technology, Beijing, China
Zhida Qin
School of Computer Science and Technology, Beijing Institute of Technology
Recommendation systems · Multi-armed bandit · Information diffusion
Chong Zhang
Xi'an Jiaotong University, Xi'an, China
Tianyu Huang
Beijing Institute of Technology, Beijing, China
Gangyi Ding
Beijing Institute of Technology, Beijing, China