LLM-Based User Simulation for Low-Knowledge Shilling Attacks on Recommender Systems

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Recommender systems (RS) are vulnerable to low-knowledge shilling attacks, yet existing methods rely on internal RS parameters and neglect review manipulation. Method: This paper proposes Agent4SR, a multi-stage LLM-based user-agent framework that jointly generates plausible, goal-directed fake ratings and semantically coherent reviews without accessing RS internals. Contribution/Results: Agent4SR introduces a cross-review feature propagation mechanism and a hybrid memory retrieval strategy to jointly optimize behavioral plausibility and attack efficacy, and it further incorporates targeted user profiling and semantic review manipulation. Evaluated on multiple public datasets and mainstream RS models, Agent4SR achieves up to a 37% higher attack success rate and a 22% lower detection rate than state-of-the-art low-knowledge attacks, demonstrating substantial gains in both stealth and effectiveness.
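The "cross-review feature propagation" idea can be illustrated with a minimal sketch: a target item's selling points are seeded into reviews of unrelated items so those features gain apparent consensus across a fake profile. All function names, phrasing templates, and data here are invented for illustration; the paper does not publish this code.

```python
def propagate_features(reviews, target_features):
    """Seed target-item features into otherwise unrelated reviews.

    reviews: list of review strings for non-target items.
    target_features: feature phrases of the item being promoted.
    """
    seeded = []
    for i, review in enumerate(reviews):
        # Cycle through the target features so each one appears somewhere.
        feature = target_features[i % len(target_features)]
        seeded.append(f"{review} It reminded me how much {feature} matters in any product.")
    return seeded

base = ["Solid blender, easy to clean.",
        "The novel dragged in the middle."]
out = propagate_features(base, ["long battery life", "active noise cancellation"])
```

In a real attack pipeline an LLM would paraphrase the injected mention so it blends into each review's topic, rather than appending a fixed template as this toy version does.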

📝 Abstract
Recommender systems (RS) are increasingly vulnerable to shilling attacks, where adversaries inject fake user profiles to manipulate system outputs. Traditional attack strategies often rely on simplistic heuristics, require access to internal RS data, and overlook the manipulation potential of textual reviews. In this work, we introduce Agent4SR, a novel framework that leverages Large Language Model (LLM)-based agents to perform low-knowledge, high-impact shilling attacks through both rating and review generation. Agent4SR simulates realistic user behavior by orchestrating adversarial interactions, selecting items, assigning ratings, and crafting reviews, while maintaining behavioral plausibility. Our design includes targeted profile construction, hybrid memory retrieval, and a review attack strategy that propagates target item features across unrelated reviews to amplify manipulation. Extensive experiments on multiple datasets and RS architectures demonstrate that Agent4SR outperforms existing low-knowledge baselines in both effectiveness and stealth. Our findings reveal a new class of emergent threats posed by LLM-driven agents, underscoring the urgent need for enhanced defenses in modern recommender systems.
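The multi-stage agent behavior the abstract describes, selecting items, assigning ratings, and crafting reviews, can be sketched as a simple loop. This is a hedged toy version: the LLM review-writing step is mocked with a template, and every name (`select_items`, `build_fake_profile`, etc.) is hypothetical rather than taken from the paper's implementation.

```python
import random

def select_items(catalog, target_item, n_filler=3, seed=0):
    """Pick the target item plus a few filler items for plausibility."""
    rng = random.Random(seed)
    fillers = rng.sample([i for i in catalog if i != target_item], n_filler)
    return [target_item] + fillers

def assign_rating(item, target_item):
    """Push the target item to the maximum; give fillers mid-range ratings."""
    return 5 if item == target_item else random.Random(item).randint(3, 4)

def craft_review(item, rating):
    """Stand-in for an LLM call that writes a rating-consistent review."""
    tone = "Loved" if rating == 5 else "Decent"
    return f"{tone} '{item}', rated {rating}/5."

def build_fake_profile(catalog, target_item):
    """One fake user: a list of (item, rating, review) interactions."""
    profile = []
    for item in select_items(catalog, target_item):
        rating = assign_rating(item, target_item)
        profile.append((item, rating, craft_review(item, rating)))
    return profile

profile = build_fake_profile(["A", "B", "C", "D", "E"], target_item="A")
```

The filler items and mid-range ratings are what give the profile behavioral plausibility; a detector looking only for all-5-star profiles would miss it.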
Problem

Research questions and friction points this paper is trying to address.

LLM-based agents simulate shilling attacks on recommender systems
Attacks manipulate ratings and reviews with low knowledge
Reveals emergent threats needing stronger recommender defenses
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based agents simulate realistic user behavior
Hybrid memory retrieval enhances attack plausibility
Review attack strategy propagates target item features
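The hybrid memory retrieval bullet above can be sketched as a scoring function that blends recency with relevance when an agent recalls past interactions. The weighting scheme and the token-overlap relevance measure are assumptions made for this example; the paper's actual formulation may differ.

```python
def hybrid_retrieve(memories, query, k=2, w_recency=0.4, w_relevance=0.6):
    """Return the top-k memories by a weighted recency + relevance score.

    memories: list of (timestep, text); larger timestep means more recent.
    """
    latest = max(t for t, _ in memories)

    def token_overlap(a, b):
        # Jaccard overlap of lowercase word sets as a cheap relevance proxy.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    def score(entry):
        t, text = entry
        return w_recency * (t / latest) + w_relevance * token_overlap(query, text)

    return sorted(memories, key=score, reverse=True)[:k]

mems = [(1, "watched a sci-fi movie"),
        (2, "bought wireless earbuds"),
        (3, "reviewed a fantasy novel")]
top = hybrid_retrieve(mems, "wireless earbuds with great sound")
```

Here the older but topically relevant earbuds memory outranks the most recent one, which is the point of mixing the two signals instead of using recency alone.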
Shengkang Gu, Fudan University
Jiahao Liu, Fudan University
Dongsheng Li, Microsoft Research Asia
Guangping Zhang, Fudan University
Mingzhe Han, Fudan University
Hansu Gu, Independent
Peng Zhang, Fudan University
Ning Gu, Fudan University
Li Shang, Fudan University
Tun Lu, Fudan University