RerouteGuard: Understanding and Mitigating Adversarial Risks for LLM Routing

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language model (LLM) routing systems to adversarial rerouting attacks, which can inflate computational costs, degrade response quality, or bypass safety mechanisms. The study presents the first threat taxonomy for LLM rerouting attacks, revealing that adversaries manipulate routing decisions by prepending confounding prefixes that shift the router's decision boundaries. To counter this, the authors propose RerouteGuard, a defense framework integrating dynamic embedding inspection, adaptive thresholding, and interpretability analysis to detect and filter adversarial routing prompts. Extensive experiments across three attack scenarios and four benchmarks demonstrate that RerouteGuard achieves over 99% detection accuracy with negligible impact on legitimate queries, and generalizes and scales well.

📝 Abstract
Recent advances in multi-model AI systems have leveraged LLM routers to reduce computational cost while maintaining response quality by assigning each query to the most appropriate model. However, as classifiers, LLM routers are vulnerable to a novel class of adversarial attacks, LLM rerouting, in which adversaries prepend specially crafted triggers to user queries to manipulate routing decisions. Such attacks can increase computational cost, degrade response quality, and even bypass safety guardrails, yet their security implications remain largely underexplored. In this work, we bridge this gap by systematizing LLM rerouting threats according to the adversary's objectives (i.e., cost escalation, quality hijacking, and safety bypass) and knowledge. Based on this threat taxonomy, we conduct a measurement study of real-world LLM routing systems against existing rerouting attacks. The results reveal that existing routing systems are vulnerable to rerouting attacks, especially in the cost-escalation scenario. We then characterize existing rerouting attacks using interpretability techniques, revealing that they exploit router decision boundaries through confounder gadgets prepended to queries to force misrouting. To mitigate these risks, we introduce RerouteGuard, a flexible and scalable guardrail framework for LLM rerouting. RerouteGuard filters adversarial rerouting prompts via dynamic embedding-based detection and adaptive thresholding. Extensive evaluations across three attack settings and four benchmarks demonstrate that RerouteGuard achieves over 99% detection accuracy against state-of-the-art rerouting attacks while maintaining negligible impact on legitimate queries. These results indicate that RerouteGuard offers a principled and practical solution for safeguarding multi-model AI systems against adversarial rerouting.
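The abstract describes "dynamic embedding-based detection and adaptive thresholding" only at a high level. As a rough illustration of the general idea (not the paper's implementation), the sketch below flags queries whose embedding lies unusually far from the centroid of legitimate traffic, with the threshold adapted to the observed spread of legitimate distances. The class name `EmbeddingGuard`, the k-sigma rule, and the toy character-bigram embedding are all assumptions made for this example; a real system would use a proper sentence encoder and the paper's own detection logic.

```python
# Hedged sketch of embedding-based adversarial-prompt filtering with an
# adaptive threshold. Illustrative only; not RerouteGuard's actual code.
import math

def embed(text):
    # Toy stand-in for a real sentence embedding: hashed character bigrams,
    # L2-normalized. A real deployment would use a learned encoder.
    vec = [0.0] * 16
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % 16] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

class EmbeddingGuard:
    """Flags queries whose embedding drifts far from legitimate traffic."""
    def __init__(self, legit_queries, k=3.0):
        embs = [embed(q) for q in legit_queries]
        self.center = centroid(embs)
        dists = [distance(e, self.center) for e in embs]
        mean = sum(dists) / len(dists)
        var = sum((d - mean) ** 2 for d in dists) / len(dists)
        # Adaptive threshold: mean + k standard deviations of the
        # distances observed on legitimate queries.
        self.threshold = mean + k * math.sqrt(var)

    def is_adversarial(self, query):
        return distance(embed(query), self.center) > self.threshold

legit = ["what is the capital of france", "summarize this article",
         "translate hello to spanish", "write a short poem about rain"]
guard = EmbeddingGuard(legit, k=3.0)
print("threshold:", round(guard.threshold, 3))
```

A prompt carrying a long confounder prefix would tend to land far from the legitimate centroid under a real encoder; the adaptive threshold avoids hand-tuning a fixed cutoff per deployment.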
Problem

Research questions and friction points this paper is trying to address.

LLM routing
adversarial attacks
rerouting
security risks
multi-model AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM routing
adversarial rerouting
RerouteGuard
confounder gadgets
dynamic embedding detection
Wenhui Zhang
Researcher/Software Engineer
Infrastructure and System
Huiyu Xu
The State Key Laboratory of Blockchain and Data Security, Zhejiang University, P. R. China
Zhibo Wang
Professor at College of Computer Science and Technology, Zhejiang University
Internet of Things, AI Security, Data Security and Privacy
Zhichao Li
The State Key Laboratory of Blockchain and Data Security, Zhejiang University, P. R. China
Zeqing He
The State Key Laboratory of Blockchain and Data Security, Zhejiang University, P. R. China
Xuelin Wei
The School of Cyber Science and Engineering, Southeast University, P. R. China
Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & Privacy, AI Security, IoT & Vehicular Security