Machine Against the RAG: Jamming Retrieval-Augmented Generation with Blocker Documents

📅 2024-06-09
🏛️ arXiv.org
📈 Citations: 12
Influential: 2
🤖 AI Summary
This work identifies a denial-of-service vulnerability, called jamming, in Retrieval-Augmented Generation (RAG) systems deployed over untrusted document corpora: "blocker" documents, specially crafted inputs that, once injected into the corpus, are retrieved in response to a targeted query and cause the system to decline to answer, ostensibly because it lacks the relevant information or because the answer would be unsafe. The paper proposes a new method for generating blocker documents based on black-box optimization: it does not rely on instruction injection, does not require knowledge of the target's embedding model or LLM, and does not use an auxiliary LLM, operating solely through the RAG system's standard input-output interface. The attack is evaluated across multiple LLMs (Llama3, GPT-3.5) and embedding models (BGE, Sentence-BERT), and experiments show that existing LLM safety metrics fail to capture vulnerability to jamming. The paper concludes by discussing defenses against blocker documents.

📝 Abstract
Retrieval-augmented generation (RAG) systems respond to queries by retrieving relevant documents from a knowledge database and applying an LLM to the retrieved documents. We demonstrate that RAG systems that operate on databases with untrusted content are vulnerable to denial-of-service attacks we call jamming. An adversary can add a single "blocker" document to the database that will be retrieved in response to a specific query and result in the RAG system not answering this query, ostensibly because it lacks the relevant information or because the answer is unsafe. We describe and measure the efficacy of several methods for generating blocker documents, including a new method based on black-box optimization. This method (1) does not rely on instruction injection, (2) does not require the adversary to know the embedding or LLM used by the target RAG system, and (3) does not rely on an auxiliary LLM. We evaluate jamming attacks on several LLMs and embeddings and demonstrate that the existing safety metrics for LLMs do not capture their vulnerability to jamming. We then discuss defenses against blocker documents.
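The black-box setting described in the abstract can be sketched as a simple search loop: the adversary treats the RAG system as an input-output oracle, injects a candidate blocker document alongside the targeted query, and mutates the candidate until the system refuses to answer. This is a minimal illustrative sketch, not the paper's actual optimizer; the function names, the random-token mutation, and the refusal phrases are all assumptions introduced here for illustration.

```python
import random
import string
from typing import Callable, Optional

# Illustrative refusal phrases; a real attack would match the target
# system's actual "cannot answer" responses (assumption, not from the paper).
REFUSALS = ("i don't know", "cannot answer", "not enough information")

def is_jammed(answer: str) -> bool:
    """Success criterion: the RAG system declines to answer the query."""
    return any(phrase in answer.lower() for phrase in REFUSALS)

def mutate(doc: str, rng: random.Random) -> str:
    """Replace one random token with random letters -- a crude stand-in
    for the paper's black-box optimization step."""
    tokens = doc.split()
    i = rng.randrange(len(tokens))
    tokens[i] = "".join(rng.choices(string.ascii_lowercase, k=len(tokens[i])))
    return " ".join(tokens)

def jam(query: str,
        rag_oracle: Callable[[str, str], str],
        seed_doc: str,
        steps: int = 200) -> Optional[str]:
    """Search for a blocker document using only the RAG system's
    input-output interface: no embedding, LLM, or template access."""
    rng = random.Random(0)
    candidate = seed_doc
    for _ in range(steps):
        # The oracle simulates retrieval of `candidate` for `query`
        # and returns the system's answer.
        if is_jammed(rag_oracle(query, candidate)):
            return candidate  # blocker document found
        candidate = mutate(candidate, rng)
    return None
```

A real attacker would also need the candidate to win the retrieval step for the targeted query (e.g. by seeding it with the query text), which is why the paper frames this as joint interference with retrieval and generation.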
Problem

Research questions and friction points this paper is trying to address.

RAG Systems
Adversarial Documents
Retrieval-Augmented Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box Optimization
Adversarial Documents
RAG System Attack