Retrieval-Augmented Review Generation for Poisoning Recommender Systems

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Data poisoning attacks against black-box recommender systems face significant challenges under resource-constrained, information-scarce conditions. Method: This paper proposes RAGAN, a novel framework that integrates Retrieval-Augmented Generation (RAG) with text style transfer, leveraging the in-context learning capability of multimodal large language models to jointly optimize the quality and stealthiness of synthetic user reviews. Through prompt engineering, an instructional agent, and a guardian model, RAGAN generates high-fidelity adversarial reviews while preserving semantic authenticity and user-specific stylistic consistency. Contribution/Results: Extensive experiments on multiple real-world datasets demonstrate that RAGAN achieves state-of-the-art poisoning performance, significantly improving attack success rates and cross-platform transferability while also enhancing undetectability.

📝 Abstract
Recent studies have shown that recommender systems (RSs) are highly vulnerable to data poisoning attacks, where malicious actors inject fake user profiles, including a group of well-designed fake ratings, to manipulate recommendations. Due to security and privacy constraints in practice, attackers typically possess limited knowledge of the victim system and thus need to craft profiles that transfer across black-box RSs. To maximize the attack impact, the profiles must also remain imperceptible. However, generating such high-quality profiles with restricted resources is challenging. Some works suggest incorporating fake textual reviews to strengthen the profiles; yet, the poor quality of the reviews largely undermines the attack effectiveness and imperceptibility under this practical setting. To tackle these challenges, in this paper, we propose to enhance the quality of the review text by harnessing the in-context learning (ICL) capabilities of multimodal foundation models. To this end, we introduce a demonstration retrieval algorithm and a text style transfer strategy to augment naive ICL. Specifically, we propose a novel practical attack framework named RAGAN to generate high-quality fake user profiles, which can offer insights into the robustness of RSs. The profiles are generated by a jailbreaker and collaboratively optimized by an instructional agent and a guardian to improve attack transferability and imperceptibility. Comprehensive experiments on various real-world datasets demonstrate that RAGAN achieves state-of-the-art poisoning attack performance.
Problem

Research questions and friction points this paper is trying to address.

Generating high-quality fake reviews for poisoning attacks
Enhancing attack transferability across black-box recommender systems
Improving imperceptibility of malicious user profiles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses in-context learning for review generation
Retrieval algorithm and style transfer augmentation
Jailbreaker-agent-guardian collaborative optimization framework
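The demonstration retrieval step listed above is a standard RAG component: given a target item, select the existing reviews most similar to a query to serve as in-context examples. The paper's actual algorithm is not reproduced here; the sketch below is a hypothetical, minimal version using bag-of-words vectors and cosine similarity, with all function names (`bow_vector`, `retrieve_demonstrations`) and the sample reviews invented for illustration. A real system would more likely use dense sentence embeddings.

```python
import math
from collections import Counter

def bow_vector(text):
    """Sparse bag-of-words term counts for a whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_demonstrations(query, corpus, k=2):
    """Return the k corpus reviews most similar to the query text."""
    qv = bow_vector(query)
    return sorted(corpus, key=lambda r: cosine(qv, bow_vector(r)), reverse=True)[:k]

# Illustrative corpus: the retrieved reviews would be inserted into the
# LLM prompt as in-context demonstrations before generation.
reviews = [
    "great battery life and a bright screen",
    "the soundtrack of this film is wonderful",
    "battery drains fast but the screen is sharp",
]
demos = retrieve_demonstrations("long battery life sharp screen", reviews, k=2)
```

The retrieved `demos` here are the two phone reviews rather than the film review, since demonstration quality in ICL depends on topical closeness to the target.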