🤖 AI Summary
This work addresses the vulnerability of private retrieval data in Retrieval-Augmented Generation (RAG) systems to data extraction attacks. We propose a transferable, multi-model adversarial attack framework that continuously optimizes universal adversarial strings, bypassing hand-crafted prompts, to compel large language models (LLMs) to output sensitive retrieved content verbatim. Our key contributions are: (1) the first joint gradient-based optimization mechanism across heterogeneous LLM architectures; (2) a retrieval-content amplification strategy that leverages initial-token importance to enhance cross-model generalizability; and (3) a hybrid probing approach integrating white-box and gray-box analysis with token-importance weighting. Evaluated on multiple LLMs and RAG benchmarks, our method significantly outperforms hand-crafted prompting and single-model optimization baselines, achieving high attack success rates even against unseen models. The results expose critical vulnerabilities in RAG's internal response generation mechanisms.
📝 Abstract
Retrieval-Augmented Generation (RAG) mitigates hallucinations in Large Language Models (LLMs) by grounding their outputs in knowledge retrieved from external sources. The use of private resources and data in constructing these external data stores exposes them to extraction attacks, in which attackers attempt to steal data from these private databases. Existing RAG extraction attacks often rely on manually crafted prompts, which limits their effectiveness. In this paper, we introduce MARAGE, a framework for optimizing an adversarial string that, when appended to user queries submitted to a target RAG system, causes the model's output to contain the retrieved RAG data verbatim. MARAGE leverages a continuous optimization scheme that integrates gradients from multiple models with different architectures simultaneously, enhancing the transferability of the optimized string to unseen models. Additionally, we propose a strategy that emphasizes the initial tokens in the target RAG data, further improving the attack's generalizability. Evaluations show that MARAGE consistently outperforms both manual and optimization-based baselines across multiple LLMs and RAG datasets, while maintaining robust transferability to previously unseen models. Moreover, we conduct probing tasks to shed light on why MARAGE is more effective than the baselines and to analyze the impact of our approach on the model's internal state.
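The multi-model objective sketched in the abstract, combining losses from several models while up-weighting the initial tokens of the target RAG passage, can be illustrated with a toy aggregation function. This is a minimal sketch under stated assumptions: the exponential decay weighting and the simple mean across models are illustrative choices, not the paper's exact formulation, and `initial_token_weights` / `joint_loss` are hypothetical helper names.

```python
import numpy as np


def initial_token_weights(n_tokens, decay=0.9):
    """Exponentially decaying weights emphasizing the first tokens
    of the target passage (illustrative scheme, not the paper's)."""
    w = decay ** np.arange(n_tokens)
    return w / w.sum()  # normalize so the weights sum to 1


def joint_loss(per_model_token_losses, decay=0.9):
    """Aggregate per-token losses from several heterogeneous models
    into one scalar objective for a shared adversarial string.

    per_model_token_losses: list of 1-D arrays, one per model, each
    holding that model's per-token loss on the target RAG data.
    """
    total = 0.0
    for losses in per_model_token_losses:
        w = initial_token_weights(len(losses), decay)
        total += float(np.dot(w, losses))  # weighted per-model loss
    return total / len(per_model_token_losses)  # mean across models


if __name__ == "__main__":
    # Two toy "models": the second struggles on early target tokens,
    # so the initial-token weighting penalizes it more heavily.
    model_a = np.array([0.2, 0.5, 0.5, 0.5])
    model_b = np.array([2.0, 0.5, 0.5, 0.5])
    print(joint_loss([model_a, model_b]))
```

In a real attack loop, each per-token loss would be the cross-entropy of the target model's next-token prediction on the RAG passage, and the adversarial string would be updated from the gradients of this joint objective.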