SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation

📅 2025-07-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing graph-based recommendation methods couple self-supervised graph learning with supervised learning in multi-task frameworks, but suffer from gradient conflicts due to heterogeneous loss functions and incur substantial computational overhead from redundant graph convolutions, limiting both training efficiency and recommendation performance. To address these issues, we propose SGCL (Supervised Graph Contrastive Learning), the first approach that unifies the supervised recommendation objective and contrastive learning into a single supervised contrastive loss, thereby eliminating gradient direction inconsistency. Leveraging the user-item bipartite graph, SGCL employs a lightweight graph neural network to capture high-order collaborative signals without unnecessary message propagation. Extensive experiments on three real-world datasets demonstrate that SGCL consistently outperforms state-of-the-art methods: it improves Recall@20 by 3.2–7.8% and accelerates training by 1.9–2.4×, achieving both superior efficiency and strong generalization capability.
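The paper's exact loss is not reproduced on this page. As a rough illustration of the core idea, a single supervised contrastive (InfoNCE-style) objective can treat each observed user-item interaction as the positive pair and other items as negatives, so one loss drives both the recommendation and contrastive tasks in the same gradient direction. The function name, embedding shapes, and temperature value below are illustrative, not taken from the paper:

```python
import math

def sgcl_style_loss(user_emb, item_embs, pos_index, temperature=0.2):
    """Illustrative supervised contrastive loss for one user.

    user_emb:   user embedding (list of floats)
    item_embs:  candidate item embeddings; item_embs[pos_index] is the
                observed (positive) interaction, the rest act as negatives
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(a):
        n = math.sqrt(dot(a, a))
        return [x / n for x in a]

    u = normalize(user_emb)
    # Cosine similarities scaled by temperature.
    sims = [dot(u, normalize(i)) / temperature for i in item_embs]
    # Numerically stable softmax denominator.
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    # Negative log-probability of the observed (positive) item.
    return -math.log(exps[pos_index] / sum(exps))
```

Minimizing this pulls the user embedding toward interacted items and pushes it away from negatives in one optimization step, which is the sense in which a supervised contrastive loss can subsume a separate recommendation loss.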

πŸ“ Abstract
Recommender systems (RecSys) are essential for online platforms, providing personalized suggestions to users within a vast sea of information. Self-supervised graph learning seeks to harness high-order collaborative filtering signals through unsupervised augmentation on the user-item bipartite graph, primarily leveraging a multi-task learning framework that includes both supervised recommendation loss and self-supervised contrastive loss. However, this separate design introduces additional graph convolution processes and creates inconsistencies in gradient directions due to disparate losses, resulting in prolonged training times and sub-optimal performance. In this study, we introduce a unified framework of Supervised Graph Contrastive Learning for recommendation (SGCL) to address these issues. SGCL uniquely combines the training of recommendation and unsupervised contrastive losses into a cohesive supervised contrastive learning loss, aligning both tasks within a single optimization direction for exceptionally fast training. Extensive experiments on three real-world datasets show that SGCL outperforms state-of-the-art methods, achieving superior accuracy and efficiency.
Problem

Research questions and friction points this paper is trying to address.

Unifying self-supervised and supervised graph learning for recommendations
Resolving gradient inconsistencies from separate loss functions
Improving training speed and recommendation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies supervised and self-supervised graph learning
Combines recommendation and contrastive losses cohesively
Aligns optimization direction for faster training