🤖 AI Summary
Recommender systems (RSs) are vulnerable to low-knowledge shilling attacks, yet existing attack methods rely on internal RS parameters and neglect the manipulation potential of reviews. Method: This paper proposes Agent4SR, a multi-stage LLM-based user-agent framework that jointly generates plausible, goal-directed fake ratings and semantically coherent reviews without access to RS internals. Contribution/Results: Agent4SR introduces a cross-review feature propagation mechanism and a hybrid memory retrieval strategy to jointly optimize behavioral plausibility and attack efficacy, and further incorporates targeted user profiling and semantic review manipulation. Evaluated on multiple public datasets and mainstream RS models, Agent4SR achieves up to a 37% higher attack success rate and a 22% lower detection rate than state-of-the-art low-knowledge attacks, demonstrating substantial gains in both stealth and effectiveness.
📝 Abstract
Recommender systems (RSs) are increasingly vulnerable to shilling attacks, in which adversaries inject fake user profiles to manipulate system outputs. Traditional attack strategies often rely on simplistic heuristics, require access to internal RS data, and overlook the manipulation potential of textual reviews. In this work, we introduce Agent4SR, a novel framework that leverages Large Language Model (LLM)-based agents to perform low-knowledge, high-impact shilling attacks through both rating and review generation. Agent4SR simulates realistic user behavior by orchestrating adversarial interactions (selecting items, assigning ratings, and crafting reviews) while maintaining behavioral plausibility. Our design includes targeted profile construction, hybrid memory retrieval, and a review attack strategy that propagates target item features across unrelated reviews to amplify manipulation. Extensive experiments on multiple datasets and RS architectures demonstrate that Agent4SR outperforms existing low-knowledge baselines in both effectiveness and stealth. Our findings reveal a new class of emergent threats posed by LLM-driven agents, underscoring the urgent need for enhanced defenses in modern recommender systems.