🤖 AI Summary
This work addresses the limitations of existing adversarial attacks on large language model-based retrieval systems (LLMRs), which typically require access to known queries or the target model—constraints that hinder real-world applicability and lead to inadequate security evaluation. To overcome this, we propose the first query-agnostic black-box attack, which operates without knowledge of either the target query or the target model. Our method leverages a zero-shot proxy large language model to generate transferable adversarial tokens that manipulate retrieval results. Built on a min-max optimization framework, it employs an adversarial learning mechanism that optimizes injectable content against learnable query samples, enabling effective cross-model transferability. Extensive experiments across multiple benchmark datasets and mainstream LLMRs show that the attack significantly degrades ranking performance, and its effectiveness suggests that even benign document edits could trigger similar robustness failures, underscoring the attack's broad applicability and practical threat potential.
📝 Abstract
Large language models (LLMs) have been serving as effective backbones for retrieval systems, including Retrieval-Augmented Generation (RAG), dense information retrieval (IR), and agent memory retrieval. Recent studies have demonstrated that such LLM-based Retrieval (LLMR) is vulnerable to adversarial attacks, which manipulate documents via token-level injections, enabling adversaries to either boost or demote these documents in retrieval tasks. However, existing attack studies mainly (1) presume that a known query is given to the attacker, and (2) rely heavily on access to the victim model's parameters or interactions, which are hardly accessible in real-world scenarios, leading to limited validity. To further explore the security risks of LLMR, we propose a practical black-box attack method that generates transferable injection tokens based on zero-shot surrogate LLMs, without requiring victim queries or knowledge of the victim model. The effectiveness of our attack raises a robustness concern: similar effects may arise from benign or unintended document edits in the real world. To achieve our attack, we first establish a theoretical framework of LLMR and verify it empirically. Under this framework, we formulate the transferable attack as a min-max problem and propose an adversarial learning mechanism that finds optimal adversarial tokens with learnable query samples. Our attack is validated to be effective on benchmark datasets across popular LLM retrievers.
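The min-max formulation described in the abstract can be illustrated with a toy alternating optimization: the inner step finds the worst-case (lowest-scoring) query among a pool of learnable query samples, and the outer step greedily appends injection tokens that raise that worst-case retrieval score. Everything below — the mean-pooled toy encoder, the random embedding table, and the greedy token search — is an illustrative sketch of the general idea, not the paper's actual method or models.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab_size = 8, 50

# Toy embedding table standing in for a zero-shot surrogate LLM encoder.
tok_emb = rng.normal(size=(vocab_size, dim))

def embed(tokens):
    """Mean-pool token embeddings, L2-normalized (toy dense retriever)."""
    v = tok_emb[tokens].mean(axis=0)
    return v / np.linalg.norm(v)

def sim(q, d):
    """Cosine similarity between normalized query/document vectors."""
    return float(q @ d)

# Learnable query samples: stand-ins for the unknown victim queries.
query_pool = [rng.integers(0, vocab_size, size=5) for _ in range(4)]
doc = list(rng.integers(0, vocab_size, size=10))

def worst_case_sim(doc_tokens):
    # Inner min: the query on which the document scores worst.
    return min(sim(embed(q), embed(doc_tokens)) for q in query_pool)

# Outer max: greedily append injection tokens, keeping only those that
# improve the worst-case score (boosting the document for all queries).
adv = list(doc)
for _ in range(3):  # budget of 3 injected tokens
    cand = max(range(vocab_size), key=lambda t: worst_case_sim(adv + [t]))
    if worst_case_sim(adv + [cand]) > worst_case_sim(adv):
        adv.append(cand)
```

A greedy discrete search replaces gradient-based token optimization here purely to keep the sketch self-contained; the same min-over-queries, max-over-tokens structure carries over to learned query samples and gradient-guided token selection.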