AI Summary
This work reveals that graph contrastive learning (GCL), while enhancing recommendation robustness, inadvertently increases vulnerability to target-item promotion attacks. The root cause lies in the spectral smoothing induced by contrastive optimization, which over-homogenizes the item embedding distribution and unintentionally amplifies the exposure of target items in the representation space. To address this, the authors first model the vulnerability from a spectral graph theory perspective and propose CLeaR, a novel targeted attack method that exploits this spectral flaw. They further design SIM, a lightweight defense framework incorporating spectral irregularity regularization and a bi-level optimization mechanism to suppress malicious promotion without degrading recommendation performance. Theoretical analysis and extensive experiments demonstrate that CLeaR significantly improves attack success rates, while SIM effectively detects and mitigates such attacks, preserving both model accuracy and generalization capability.
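The "spectral smoothing" the summary refers to can be made concrete with a simple diagnostic. The sketch below (illustrative only, not the paper's code; the function name `spectral_smoothness` and the entropy-based measure are our own choices) scores how flat the singular-value spectrum of an item-embedding matrix is: a flatter spectrum means the embeddings spread more uniformly across directions of the representation space, the condition the summary links to easier promotion of target items.

```python
# Illustrative sketch: quantify "spectral smoothness" of an item-embedding
# matrix as the normalized entropy of its singular-value spectrum.
# A flatter (smoother) spectrum -> higher entropy -> embeddings are spread
# more uniformly across representation directions.
import numpy as np

def spectral_smoothness(E: np.ndarray) -> float:
    """Normalized entropy of the singular-value distribution of E (items x dim).

    Returns a value in (0, 1]; values near 1 indicate a near-flat spectrum.
    """
    s = np.linalg.svd(E, compute_uv=False)
    p = s / s.sum()                          # spectrum as a probability vector
    p = p[p > 0]                             # drop exact zeros before log
    entropy = -(p * np.log(p)).sum()
    return float(entropy / np.log(len(s)))   # normalize by the maximum entropy

rng = np.random.default_rng(0)
# Low-rank embeddings: a few dominant directions -> skewed spectrum.
low_rank = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 64))
# Isotropic embeddings: energy spread evenly -> near-flat spectrum.
isotropic = rng.normal(size=(1000, 64))
print(spectral_smoothness(low_rank), spectral_smoothness(isotropic))
```

On these synthetic matrices the isotropic embeddings score much closer to 1 than the low-rank ones, mirroring the over-homogenization effect described above.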
Abstract
Graph Contrastive Learning (GCL) has demonstrated substantial promise in enhancing the robustness and generalization of recommender systems, particularly by enabling models to leverage large-scale unlabeled data for improved representation learning. However, in this paper, we reveal an unexpected vulnerability: integrating GCL inadvertently increases the susceptibility of recommender systems to targeted promotion attacks. Through both theoretical investigation and empirical validation, we identify the root cause as the spectral smoothing effect induced by contrastive optimization, which disperses item embeddings across the representation space and unintentionally enhances the exposure of target items. Building on this insight, we introduce CLeaR, a bi-level optimization attack method that deliberately amplifies spectral smoothness, enabling a systematic investigation of the susceptibility of GCL-based recommendation models to targeted promotion attacks. Our findings highlight the urgent need for robust countermeasures; in response, we further propose SIM, a spectral irregularity mitigation framework designed to accurately detect and suppress targeted items without compromising model performance. Extensive experiments on multiple benchmark datasets demonstrate that GCL-based recommendation models are markedly more susceptible to CLeaR than to existing targeted promotion attacks, and that SIM effectively mitigates these vulnerabilities.
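The dispersal effect attributed to contrastive optimization can be seen in the standard InfoNCE objective commonly used in GCL. The sketch below is a generic numpy implementation of that loss (not CLeaR's or any specific model's code; the function name `info_nce` and the temperature default are our own choices): the log-sum-exp denominator penalizes similarity to all non-positive items, which pushes embeddings apart and, as argued above, flattens the embedding spectrum.

```python
# Hedged sketch of the generic InfoNCE contrastive loss between two views
# of the same items. The denominator acts as a uniformity term: it pushes
# every embedding away from all others, dispersing items across the
# representation space.
import numpy as np

def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.2) -> float:
    """Mean InfoNCE loss for paired views z1, z2 of shape (n, d)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalize views
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                              # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_prob).mean())               # positives on the diagonal

rng = np.random.default_rng(1)
z = rng.normal(size=(32, 16))
# Aligned views yield a much lower loss than unrelated views.
print(info_nce(z, z), info_nce(z, rng.normal(size=(32, 16))))
```

Minimizing this loss rewards configurations where every item is dissimilar to every other item, which is precisely the over-uniform geometry that, per the abstract, a targeted promotion attack can exploit.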