🤖 AI Summary
Large language models (LLMs) are vulnerable to prompt injection attacks that can jailbreak them or hijack their task, a risk that is especially acute in retrieval-augmented generation (RAG) systems. To address this, we propose the Highlight & Summarize (H&S) framework, which decouples RAG into two specialized modules: an LLM-based "Highlighter" that extracts salient passages from the retrieved documents, and a "Summarizer" that generates a response *exclusively* from the highlighted content, without ever seeing the user's original query. Because adversarial instructions in the query never reach the generative model, this separation blocks the injection pathway by design. Experiments show that H&S mitigates a diverse set of prompt injection attacks while preserving, and often improving, answer accuracy and coherence compared to standard RAG. Moreover, H&S is model-agnostic: the highlighter and summarizer can be instantiated with different LLMs, combining strong security properties with practical deployability.
📝 Abstract
Preventing jailbreaking and model hijacking of Large Language Models (LLMs) is an important yet challenging task. For example, when interacting with a chatbot, malicious users can input specially crafted prompts to cause the LLM to generate undesirable content or perform a completely different task from its intended purpose. Existing mitigations for such attacks typically rely on hardening the LLM's system prompt or using a content classifier trained to detect undesirable content or off-topic conversations. However, these probabilistic approaches are relatively easy to bypass due to the very large space of possible inputs and undesirable outputs. In this paper, we present and evaluate Highlight & Summarize (H&S), a new design pattern for retrieval-augmented generation (RAG) systems that prevents these attacks by design. The core idea is to perform the same task as a standard RAG pipeline (i.e., to provide natural language answers to questions, based on relevant sources) without ever revealing the user's question to the generative LLM. This is achieved by splitting the pipeline into two components: a highlighter, which takes the user's question and extracts relevant passages ("highlights") from the retrieved documents, and a summarizer, which takes the highlighted passages and summarizes them into a cohesive answer. We describe several possible instantiations of H&S and evaluate their generated responses in terms of correctness, relevance, and response quality. Surprisingly, when using an LLM-based highlighter, the majority of H&S responses are judged to be better than those of a standard RAG pipeline.
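The two-component split described above can be sketched as a small pipeline. The prompt templates, function names, and the verbatim-span filter below are illustrative assumptions, not the paper's actual implementation; the key property it demonstrates is that the summarizer never receives the user's question.

```python
from typing import Callable, List

# Prompt templates; the exact wording is an assumption, not from the paper.
HIGHLIGHT_PROMPT = (
    "Copy, verbatim and one per line, the passages from the document "
    "that are relevant to the question.\n"
    "Question: {question}\nDocument: {document}"
)
SUMMARIZE_PROMPT = (
    "Summarize the following passages into one cohesive answer, "
    "using only the information they contain:\n{passages}"
)

LLM = Callable[[str], str]  # any prompt -> completion function


def highlight(llm: LLM, question: str, documents: List[str]) -> List[str]:
    """Highlighter: the only component that sees the user's question."""
    highlights: List[str] = []
    for doc in documents:
        spans = llm(HIGHLIGHT_PROMPT.format(question=question, document=doc))
        # Keep only spans that occur verbatim in the retrieved document, so
        # the highlighter cannot forward injected instructions downstream
        # (a hypothetical safeguard; one possible instantiation).
        highlights.extend(s for s in spans.splitlines() if s and s in doc)
    return highlights


def answer(highlighter: LLM, summarizer: LLM, question: str,
           documents: List[str]) -> str:
    """H&S pipeline: the summarizer never receives the user's question,
    only the highlighted passages."""
    passages = highlight(highlighter, question, documents)
    return summarizer(SUMMARIZE_PROMPT.format(passages="\n".join(passages)))
```

In use, `highlighter` and `summarizer` would wrap calls to (possibly different) LLMs, reflecting the model-agnostic design; the stubs needed to exercise the pipeline are trivial prompt-to-text functions.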