When Safety Becomes a Vulnerability: Exploiting LLM Alignment Homogeneity for Transferable Blocking in RAG

📅 2026-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes TabooRAG, a novel framework that exposes the homogeneity of safety alignment mechanisms in modern large language models (LLMs) as a previously unrecognized transferable attack surface. In retrieval-augmented generation (RAG) systems, this homogeneity enables transferable denial-of-service attacks that cause legitimate user queries to be erroneously rejected. Without requiring access to the target model—operating entirely in a black-box setting—TabooRAG leverages a proxy model to generate highly transferable blocking documents. By integrating a query-aware policy library with risk-context transfer techniques, the method achieves up to 96% blocking success across seven mainstream LLMs and three benchmark datasets, substantially outperforming existing approaches and demonstrating strong cross-model transferability and effectiveness.

Technology Category

Application Category

📝 Abstract
Retrieval-Augmented Generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge, but its reliance on potentially poisonable knowledge bases introduces new availability risks. Attackers can inject documents that cause LLMs to refuse benign queries, attacks known as blocking attacks. Prior blocking attacks relying on adversarial suffixes or explicit instruction injection are increasingly ineffective against modern safety-aligned LLMs. We observe that safety-aligned LLMs exhibit heightened sensitivity to query-relevant risk signals, causing alignment mechanisms designed for harm prevention to become a source of exploitable refusal. Moreover, mainstream alignment practices share overlapping risk categories and refusal criteria, a phenomenon we term alignment homogeneity, enabling restricted risk context constructed on an accessible LLM to transfer across LLMs. Based on this insight, we propose TabooRAG, a transferable blocking attack framework operating under a strict black-box setting. An attacker can generate a single retrievable blocking document per query by optimizing against a surrogate LLM in an accessible RAG environment, and directly transfer it to an unknown target RAG system without access to the target model. We further introduce a query-aware strategy library to reuse previously effective strategies and improve optimization efficiency. Experiments across 7 modern LLMs and 3 datasets demonstrate that TabooRAG achieves stable cross-model transferability and state-of-the-art blocking success rates, reaching up to 96% on GPT-5.2. Our findings show that increasingly standardized safety alignment across modern LLMs creates a shared and transferable attack surface in RAG systems, revealing a need for improved defenses.
Problem

Research questions and friction points this paper is trying to address.

blocking attacks
alignment homogeneity
Retrieval-Augmented Generation
transferable attacks
safety alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

alignment homogeneity
transferable blocking attack
Retrieval-Augmented Generation (RAG)
safety alignment vulnerability
black-box attack
🔎 Similar Papers
J
Junchen Li
University of Electronic Science and Technology of China
C
Chao Qi
University of Electronic Science and Technology of China
R
Rongzheng Wang
University of Electronic Science and Technology of China
Qizhi Chen
Qizhi Chen
PhD Candidate of Zhejiang University
Multimodal ReasoningEmbodied AI3D Vision
L
Liang Xu
University of Electronic Science and Technology of China
Di Liang
Di Liang
University of Michigan
diode lasersSi photonicsphotonic integrated circuitsnanofabrication
B
Bob Simons
Tencent Hunyuan
Shuang Liang
Shuang Liang
Research Associated Professor, University of Electronic Science and Technology of China
Graph Neural NetworkKnowledge GraphData Mining