PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

📅 2026-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes PIDP-Attack, a novel composite attack that integrates prompt injection with database poisoning to effectively manipulate large language model (LLM) outputs without prior knowledge of user queries. Unlike existing data poisoning attacks on retrieval-augmented generation (RAG) systems—which rely on query priors and thus suffer from limited practicality—PIDP-Attack operates during inference by injecting malicious tokens into user queries and contaminating only a small number of retrieved documents. Evaluated on benchmark datasets including Natural Questions, HotpotQA, and MS-MARCO, the method demonstrates strong transferability across eight mainstream LLMs. In open-domain question answering, it achieves a 4%–16% higher attack success rate compared to PoisonedRAG while preserving high retrieval accuracy, significantly enhancing both the generality and real-world applicability of RAG-targeted attacks.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of applications. However, their practical deployment is often hindered by issues such as outdated knowledge and the tendency to generate hallucinations. To address these limitations, Retrieval-Augmented Generation (RAG) systems have been introduced, enhancing LLMs with external, up-to-date knowledge sources. Despite their advantages, RAG systems remain vulnerable to adversarial attacks, with data poisoning emerging as a prominent threat. Existing poisoning-based attacks typically require prior knowledge of the user's specific queries, limiting their flexibility and real-world applicability. In this work, we propose PIDP-Attack, a novel compound attack that integrates prompt injection with database poisoning in RAG. By appending malicious characters to queries at inference time and injecting a limited number of poisoned passages into the retrieval database, our method can effectively manipulate LLM response to arbitrary query without prior knowledge of the user's actual query. Experimental evaluations across three benchmark datasets (Natural Questions, HotpotQA, MS-MARCO) and eight LLMs demonstrate that PIDP-Attack consistently outperforms the original PoisonedRAG. Specifically, our method improves attack success rates by 4% to 16% on open-domain QA tasks while maintaining high retrieval precision, proving that the compound attack strategy is both necessary and highly effective.
Problem

Research questions and friction points this paper is trying to address.

Retrieval-Augmented Generation
adversarial attacks
data poisoning
prompt injection
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt Injection
Database Poisoning
Retrieval-Augmented Generation
Adversarial Attack
LLM Security
🔎 Similar Papers
No similar papers found.
H
Haozhen Wang
The Chinese University of Hong Kong, Shenzhen
Haoyue Liu
Haoyue Liu
School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Computer VisionEvent Camera
J
Jionghao Zhu
The Chinese University of Hong Kong, Shenzhen
Z
Zhichao Wang
The Chinese University of Hong Kong, Shenzhen
Yongxin Guo
Yongxin Guo
Alibaba Group
Video UnderstandingMLLM
X
Xiaoying Tang
The Chinese University of Hong Kong, Shenzhen