AI Summary
Recommender systems are vulnerable to shilling attacks, where adversaries inject fake users to artificially boost target items' rankings. However, existing attack methods struggle to simultaneously achieve high attack strength and behavioral realism, leading to an underestimation of security risks. This paper proposes DLDA, the first shilling attack framework to leverage conditional latent diffusion models for generating highly realistic and controllable fake users within the collaborative embedding space. By incorporating pre-aligned embedding modeling and dispersion regularization, DLDA significantly enhances adversarial stealth and robustness against detection while preserving strong generalization capability. Extensive experiments across three real-world datasets and five state-of-the-art recommendation models demonstrate that DLDA achieves higher attack success rates and lower detection rates than prior approaches, uncovering more severe security vulnerabilities in modern recommender systems.
Abstract
Recommender systems (RSs) are now fundamental to various online platforms, but their dependence on user-contributed data leaves them vulnerable to shilling attacks that manipulate item rankings by injecting fake users. Although widely studied, most existing attack models fail to meet two critical objectives simultaneously: achieving strong adversarial promotion of target items while maintaining realistic behavior to evade detection. As a result, the true severity of shilling attacks that reconcile these two objectives remains underappreciated. To expose this overlooked vulnerability, we present DLDA, a diffusion-based attack framework that generates highly effective yet indistinguishable fake users by enabling fine-grained control over target promotion. Specifically, DLDA operates in a pre-aligned collaborative embedding space, where it employs a conditional latent diffusion process to iteratively synthesize fake user profiles with precise control over target items. To evade detection, DLDA introduces a dispersive regularization mechanism that promotes variability and realism in the generated behavioral patterns. Extensive experiments on three real-world datasets and five popular RS models demonstrate that, compared to prior attacks, DLDA consistently achieves stronger item promotion while remaining harder to detect. These results highlight that modern RSs are more vulnerable than previously recognized, underscoring the urgent need for more robust defenses.
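The two mechanisms named above (target-conditioned latent diffusion plus a dispersion term) can be illustrated with a minimal toy sketch. This is not the paper's implementation: the denoiser here is a hand-written stub rather than a learned conditional noise predictor, and all dimensions, schedules, and variable names (`denoise_step`, `dispersion_penalty_grad`, `target_emb`) are illustrative assumptions. It only shows the shape of the idea: start from noise, iteratively denoise toward a target-item condition in an embedding space, and apply a pairwise repulsion so the fake profiles do not collapse into one easily detectable cluster.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 16   # collaborative embedding dimension (illustrative)
N_FAKE = 8     # number of fake user profiles to synthesize
T = 50         # number of diffusion steps

# Hypothetical embedding of the target item the attack conditions on.
target_emb = rng.normal(size=EMB_DIM)
target_emb /= np.linalg.norm(target_emb)

def denoise_step(x, t, cond):
    """Stub denoiser: blends the noisy embeddings toward the
    conditioning (target-item) direction as t decreases. A real
    system would use a learned conditional noise predictor."""
    keep = t / T  # crude linear schedule: keep less noise over time
    return keep * x + (1.0 - keep) * cond

def dispersion_penalty_grad(x, strength=0.05):
    """Pairwise repulsion: pushes the samples apart so the fake
    profiles stay varied instead of collapsing to one point."""
    diffs = x[:, None, :] - x[None, :, :]           # (N, N, D)
    dists = np.linalg.norm(diffs, axis=-1) + 1e-8   # (N, N)
    repel = (diffs / dists[..., None]).sum(axis=1)  # summed unit dirs
    return strength * repel

# Reverse process: start from Gaussian noise, iteratively denoise
# with target conditioning, then apply the dispersion correction.
x = rng.normal(size=(N_FAKE, EMB_DIM))
for t in range(T, 0, -1):
    x = denoise_step(x, t, target_emb)
    x = x + dispersion_penalty_grad(x)

# The resulting embeddings align with the target direction
# (promotion) while remaining mutually spread out (stealth).
sims = x @ target_emb
```

In this sketch the conditioning pulls every sample toward `target_emb` (the promotion objective), while the repulsion term preserves spread among samples (the realism/evasion objective), mirroring the tension the abstract describes between attack strength and detectability.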