When Graph Contrastive Learning Backfires: Spectral Vulnerability and Defense in Recommendation

πŸ“… 2025-07-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work reveals that graph contrastive learning (GCL), while enhancing recommendation robustness, inadvertently increases vulnerability to target-item promotion attacks. The root cause is the spectral smoothing induced by contrastive optimization, which disperses item embeddings across the representation space and unintentionally amplifies the exposure of target items. Building on this insight, the authors first model the vulnerability from a spectral graph theory perspective and propose CLeaR, a bi-level optimization attack that deliberately amplifies spectral smoothness to promote target items. They further design SIM, a lightweight defense framework that uses spectral irregularity regularization to suppress malicious promotion without degrading recommendation performance. Theoretical analysis and extensive experiments show that CLeaR achieves significantly higher attack success rates than existing targeted promotion attacks, while SIM effectively detects and mitigates such attacks, preserving both model accuracy and generalization.
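As a rough illustration of the spectral-smoothing idea described above (this is not the paper's code; the metric and all names are hypothetical), one can proxy how "smooth" an item embedding matrix is by the flatness of its singular-value spectrum, measured via normalized spectral entropy:

```python
import numpy as np

def spectral_flatness(item_emb: np.ndarray) -> float:
    """Proxy for spectral smoothness: entropy of the normalized
    singular-value distribution of the item embedding matrix.
    Higher values mean a flatter spectrum, i.e. embedding mass is
    spread more uniformly across directions of the space."""
    s = np.linalg.svd(item_emb, compute_uv=False)
    p = s / s.sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return float(entropy / np.log(len(p)))  # normalized to [0, 1]

# Toy comparison: embeddings clustered along one dominant direction
# vs. embeddings dispersed isotropically across the space.
rng = np.random.default_rng(0)
clustered = rng.normal(size=(500, 1)) @ rng.normal(size=(1, 64))
clustered += 0.01 * rng.normal(size=(500, 64))
dispersed = rng.normal(size=(500, 64))
print(spectral_flatness(clustered) < spectral_flatness(dispersed))  # True
```

Under the paper's finding, contrastive optimization pushes the spectrum toward the flat (dispersed) regime, which is what an attacker can exploit to raise a target item's exposure.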

πŸ“ Abstract
Graph Contrastive Learning (GCL) has demonstrated substantial promise in enhancing the robustness and generalization of recommender systems, particularly by enabling models to leverage large-scale unlabeled data for improved representation learning. However, in this paper, we reveal an unexpected vulnerability: the integration of GCL inadvertently increases the susceptibility of a recommender to targeted promotion attacks. Through both theoretical investigation and empirical validation, we identify the root cause as the spectral smoothing effect induced by contrastive optimization, which disperses item embeddings across the representation space and unintentionally enhances the exposure of target items. Building on this insight, we introduce CLeaR, a bi-level optimization attack method that deliberately amplifies spectral smoothness, enabling a systematic investigation of the susceptibility of GCL-based recommendation models to targeted promotion attacks. Our findings highlight the urgent need for robust countermeasures; in response, we further propose SIM, a spectral irregularity mitigation framework designed to accurately detect and suppress targeted items without compromising model performance. Extensive experiments on multiple benchmark datasets demonstrate that, compared to existing targeted promotion attacks, GCL-based recommendation models exhibit greater susceptibility when evaluated with CLeaR, while SIM effectively mitigates these vulnerabilities.
Problem

Research questions and friction points this paper is trying to address.

Integrating GCL inadvertently increases recommender susceptibility to targeted promotion attacks
The spectral smoothing effect of contrastive optimization unintentionally disperses item embeddings and raises target-item exposure
Existing defenses lack a way to detect and suppress targeted items without degrading recommendation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

CLeaR, a bi-level optimization attack that deliberately amplifies spectral smoothness
SIM, a spectral irregularity mitigation framework that detects and suppresses targeted items
Bi-level formulation enables systematic investigation of GCL-based recommenders' vulnerabilities
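The paper does not include SIM's implementation here; as a loose, assumption-laden sketch of how a spectral-irregularity style regularizer could be wired into training (the entropy-based penalty, the function names, and the weight `lam` are all hypothetical, not the authors' method):

```python
import numpy as np

def smoothness_penalty(item_emb: np.ndarray) -> float:
    """Normalized spectral entropy of the item embedding matrix:
    close to 1 for a flat (over-smoothed) singular-value spectrum,
    lower when a few directions dominate."""
    s = np.linalg.svd(item_emb, compute_uv=False)
    p = s / s.sum()
    return float(-(p * np.log(p + 1e-12)).sum() / np.log(len(p)))

def regularized_loss(rec_loss: float, item_emb: np.ndarray,
                     lam: float = 0.1) -> float:
    """Hypothetical training objective: the usual recommendation
    loss plus a term that penalizes an over-flat spectrum,
    discouraging the over-homogenized embedding geometry the paper
    links to target-item exposure."""
    return rec_loss + lam * smoothness_penalty(item_emb)

# Usage: evaluate the regularized objective on a toy embedding matrix.
rng = np.random.default_rng(1)
emb = rng.normal(size=(200, 32))
total = regularized_loss(0.5, emb)
```

The design intuition is defense-side: by keeping some irregularity in the spectrum, the model avoids the uniformly dispersed geometry that CLeaR-style attacks amplify, while the small weight `lam` limits interference with recommendation accuracy.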