Controllable and Stealthy Shilling Attacks via Dispersive Latent Diffusion

πŸ“… 2025-08-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Recommender systems are vulnerable to shilling attacks, where adversaries inject fake users to artificially boost target items' rankings. However, existing attack methods struggle to simultaneously achieve high attack strength and behavioral realism, leading to an underestimation of security risks. This paper proposes DLDA, the first shilling attack framework leveraging conditional latent diffusion models to generate highly realistic and controllable fake users within the collaborative embedding space. By incorporating pre-aligned embedding modeling and dispersion regularization, DLDA significantly enhances adversarial stealth and robustness against detection while preserving strong generalization capability. Extensive experiments across three real-world datasets and five state-of-the-art recommendation models demonstrate that DLDA achieves higher attack success rates and lower detection rates than prior approaches, uncovering more severe security vulnerabilities in modern recommender systems.

πŸ“ Abstract
Recommender systems (RSs) are now fundamental to various online platforms, but their dependence on user-contributed data leaves them vulnerable to shilling attacks that can manipulate item rankings by injecting fake users. Although widely studied, most existing attack models fail to meet two critical objectives simultaneously: achieving strong adversarial promotion of target items while maintaining realistic behavior to evade detection. As a result, the true severity of shilling threats that manage to reconcile the two objectives remains underappreciated. To expose this overlooked vulnerability, we present DLDA, a diffusion-based attack framework that can generate highly effective yet indistinguishable fake users by enabling fine-grained control over target promotion. Specifically, DLDA operates in a pre-aligned collaborative embedding space, where it employs a conditional latent diffusion process to iteratively synthesize fake user profiles with precise target item control. To evade detection, DLDA introduces a dispersive regularization mechanism that promotes variability and realism in generated behavioral patterns. Extensive experiments on three real-world datasets and five popular RS models demonstrate that, compared to prior attacks, DLDA consistently achieves stronger item promotion while remaining harder to detect. These results highlight that modern RSs are more vulnerable than previously recognized, underscoring the urgent need for more robust defenses.
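The abstract's core mechanism is a conditional latent diffusion process: starting from noise in the collaborative embedding space, fake user embeddings are iteratively denoised under the guidance of a target-item condition. The paper's actual architecture and schedule are not reproduced here; the sketch below is a minimal deterministic (DDIM-style) reverse process with a stand-in linear noise predictor, where `toy_denoiser`, the weight matrix `W`, and the beta schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(z_t, t_frac, cond, W):
    """Stand-in noise predictor (assumption, not the paper's network):
    one linear map over the concatenated noisy embedding, timestep
    fraction, and target-item condition embedding."""
    x = np.concatenate([z_t, np.full_like(z_t, t_frac), cond], axis=-1)
    return x @ W

def sample_fake_users(cond, steps, dim, W):
    """Deterministic reverse diffusion conditioned on target-item
    embeddings; returns synthetic user embeddings z_0."""
    betas = np.linspace(1e-4, 0.2, steps)          # toy noise schedule
    alphas_bar = np.cumprod(1.0 - betas)
    z = rng.standard_normal((cond.shape[0], dim))  # start from pure noise
    for t in reversed(range(steps)):
        eps = toy_denoiser(z, t / steps, cond, W)
        a_bar = alphas_bar[t]
        # estimate the clean embedding, then step to the previous noise level
        z0_hat = (z - np.sqrt(1.0 - a_bar) * eps) / np.sqrt(a_bar)
        a_prev = alphas_bar[t - 1] if t > 0 else 1.0
        z = np.sqrt(a_prev) * z0_hat + np.sqrt(1.0 - a_prev) * eps
    return z

dim, steps = 16, 10
W = rng.standard_normal((dim * 3, dim)) * 0.05
cond = rng.standard_normal((4, dim))   # embeddings of items to promote
fake_users = sample_fake_users(cond, steps, dim, W)
```

In a real attack pipeline the resulting embeddings would be decoded back into rating profiles; that decoding step, like the denoiser itself, is specific to the paper and omitted here.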
Problem

Research questions and friction points this paper is trying to address.

Expose vulnerability in recommender systems to stealthy shilling attacks
Achieve strong adversarial promotion while evading detection mechanisms
Develop a diffusion-based framework for realistic fake user generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-based attack framework DLDA
Conditional latent diffusion process
Dispersive regularization for evasion
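The dispersive regularization listed above encourages generated fake users to vary rather than collapse into near-identical profiles, which is what detection methods typically flag. The paper's exact regularizer is not specified here; a common uniformity-style dispersion penalty over a batch of embeddings is sketched below as an assumed illustration.

```python
import numpy as np

def dispersion_penalty(z, t=2.0):
    """Uniformity-style dispersion loss over a batch of generated user
    embeddings: log-mean of exp(-t * squared pairwise distance) for all
    pairs i != j. Lower values mean the batch is more spread out."""
    sq = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)  # pairwise ||z_i - z_j||^2
    mask = ~np.eye(z.shape[0], dtype=bool)               # drop i == j pairs
    return np.log(np.exp(-t * sq[mask]).mean())

rng = np.random.default_rng(0)
clustered = rng.standard_normal((32, 8)) * 0.05  # nearly identical fake users
dispersed = rng.standard_normal((32, 8))         # varied fake users
penalty_c = dispersion_penalty(clustered)
penalty_d = dispersion_penalty(dispersed)
```

Minimizing such a term alongside the diffusion objective trades a little attack concentration for behavioral variability, which is the stealth mechanism the abstract describes.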
πŸ”Ž Similar Papers
No similar papers found.
Shutong Qiao
University of Queensland, Brisbane, Australia
Wei Yuan
University of Queensland, Brisbane, Australia
Junliang Yu
The University of Queensland
Data-Centric AI, LLM Agent, Recommender Systems, Graph Learning
Tong Chen
University of Queensland, Brisbane, Australia
Quoc Viet Hung Nguyen
Griffith University, Gold Coast, Australia
Hongzhi Yin
Professor and ARC Future Fellow, University of Queensland
Recommender System, Graph Learning, Spatial-temporal Prediction, Edge Intelligence, LLM