Secure Retrieval-Augmented Generation against Poisoning Attacks

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the security vulnerability of retrieval-augmented generation (RAG) systems to data poisoning attacks on their knowledge bases. We propose RAGuard, a lightweight, fine-tuning-free defense framework. Its core innovation lies in (1) expanding the retrieval scope to dilute the impact of poisoned content, and (2) introducing a multi-stage, non-parametric detection mechanism that jointly leverages chunk-level perplexity anomaly analysis and high-similarity text identification for precise filtering of maliciously injected content. RAGuard requires no model retraining, incurs minimal deployment overhead, and demonstrates robustness against strong adaptive poisoning attacks. Extensive evaluations on large-scale benchmarks show that RAGuard significantly enhances both the security and output reliability of RAG systems, achieving higher detection accuracy than existing baselines. It establishes a scalable, plug-and-play paradigm for securing RAG deployments.

📝 Abstract
Large language models (LLMs) have transformed natural language processing (NLP), enabling applications from content generation to decision support. Retrieval-Augmented Generation (RAG) improves LLMs by incorporating external knowledge but also introduces security risks, particularly from data poisoning, where an attacker injects poisoned texts into the knowledge database to manipulate system outputs. While various defenses have been proposed, they often struggle against advanced attacks. To address this, we introduce RAGuard, a detection framework designed to identify poisoned texts. RAGuard first expands the retrieval scope to increase the proportion of clean texts, reducing the likelihood of retrieving poisoned content. It then applies chunk-wise perplexity filtering to detect abnormal variations and text similarity filtering to flag highly similar texts. This non-parametric approach enhances RAG security, and experiments on large-scale datasets demonstrate its effectiveness in detecting and mitigating poisoning attacks, including strong adaptive attacks.
Problem

Research questions and friction points this paper is trying to address.

Securing Retrieval-Augmented Generation against data poisoning attacks
Detecting poisoned texts in knowledge databases using filtering techniques
Mitigating manipulation of system outputs through enhanced retrieval security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expands retrieval scope to reduce poisoned content likelihood
Applies chunk-wise perplexity filtering for anomaly detection
Uses text similarity filtering to flag suspicious duplicates
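The filtering stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it stands in a toy unigram model for the LLM log-probabilities that perplexity filtering would use in practice, scores whole texts rather than chunks, and the thresholds (`ppl_factor`, `sim_threshold`) are illustrative assumptions.

```python
import math
from difflib import SequenceMatcher

def unigram_probs(texts):
    """Toy token-probability model built over the retrieved set; a real
    deployment would use an LLM's log-probabilities instead (assumption)."""
    counts, total = {}, 0
    for text in texts:
        for tok in text.lower().split():
            counts[tok] = counts.get(tok, 0) + 1
            total += 1
    return {t: c / total for t, c in counts.items()}, total

def perplexity(text, probs, total):
    """exp(mean negative log-probability) of the text's tokens."""
    toks = text.lower().split()
    nll = sum(-math.log(probs.get(t, 1.0 / (total + 1))) for t in toks)
    return math.exp(nll / max(len(toks), 1))

def filter_retrieved(texts, ppl_factor=1.5, sim_threshold=0.9):
    """Drop texts whose perplexity far exceeds the median of the retrieved
    set, or that are near-duplicates of an already-kept text."""
    probs, total = unigram_probs(texts)
    ppls = [perplexity(t, probs, total) for t in texts]
    median = sorted(ppls)[len(ppls) // 2]
    kept = []
    for text, ppl in zip(texts, ppls):
        if ppl > ppl_factor * median:
            continue  # perplexity anomaly: likely injected content
        if any(SequenceMatcher(None, text, k).ratio() > sim_threshold
               for k in kept):
            continue  # high similarity to a kept text: flag as suspicious
        kept.append(text)
    return kept

retrieved = [
    "the capital of france is paris",
    "paris is the capital city of france",
    "the capital of france is paris",   # injected duplicate
    "zqx vbn wrt plk jjq mmx",          # injected gibberish
]
clean = filter_retrieved(retrieved)     # keeps only the first two texts
```

In this sketch the gibberish text is removed by the perplexity check (its tokens are rare under the model, so its perplexity sits well above the median) and the exact duplicate by the similarity check, mirroring the two-filter design.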