SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety Guardrails in Large Language Models

📅 2025-02-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the efficiency–accuracy trade-off in deploying safety guard models for large language models (LLMs), this paper proposes a dynamic routing mechanism. It introduces a lightweight binary router that classifies input prompts as "easy" or "hard" based on confidence and uncertainty estimates; only "hard" instances trigger the heavyweight safety guard model, while "easy" ones are handled by a distilled lightweight model. The key contribution is a model-level, fine-grained adaptive selection architecture that integrates multi-stage safety assessment and collaborative inference, overcoming the limitations of conventional static distillation and uniform heavyweight guarding. Evaluated across multiple safety benchmarks, the approach reduces average latency by 42% relative to full heavyweight guarding, maintains over 99.3% harmful-content detection accuracy, and achieves markedly higher interception accuracy per unit of computational cost.

📝 Abstract
Deploying large language models (LLMs) in real-world applications requires robust safety guard models to detect and block harmful user prompts. While large safety guard models achieve strong performance, their computational cost is substantial. To mitigate this, smaller distilled models are used, but they often underperform on "hard" examples where the larger model provides accurate predictions. We observe that many inputs can be reliably handled by the smaller model, while only a small fraction require the larger model's capacity. Motivated by this, we propose SafeRoute, a binary router that distinguishes hard examples from easy ones. Our method selectively applies the larger safety guard model to the data that the router considers hard, improving efficiency while maintaining accuracy compared to solely using the larger safety guard model. Experimental results on multiple benchmark datasets demonstrate that our adaptive model selection significantly enhances the trade-off between computational cost and safety performance, outperforming relevant baselines.
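The routing idea above can be sketched in a few lines. The sketch below is a simplified illustration, not the paper's implementation: SafeRoute trains a learned binary router, which is approximated here by a confidence threshold on the small guard's prediction, and both guard models are hypothetical stand-in functions.

```python
# Sketch of SafeRoute-style adaptive model selection between a small
# (cheap) and a large (accurate) safety guard model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GuardResult:
    harmful: bool       # predicted label for the prompt
    confidence: float   # probability assigned to that label

def safe_route(
    prompt: str,
    small_guard: Callable[[str], GuardResult],
    large_guard: Callable[[str], GuardResult],
    router_threshold: float = 0.9,
) -> GuardResult:
    """Run the small guard first; escalate to the large guard only when
    the example looks 'hard' (here: low small-model confidence)."""
    result = small_guard(prompt)
    if result.confidence >= router_threshold:
        return result            # "easy" example: small model suffices
    return large_guard(prompt)   # "hard" example: pay the larger cost

# Toy stand-ins so the sketch runs end to end (not real guard models).
def toy_small(prompt: str) -> GuardResult:
    harmful = "attack" in prompt.lower()
    confidence = 0.95 if harmful or len(prompt) < 40 else 0.5
    return GuardResult(harmful, confidence)

def toy_large(prompt: str) -> GuardResult:
    text = prompt.lower()
    return GuardResult("attack" in text or "exploit" in text, 0.99)
```

In this toy setup, short prompts are handled entirely by the small model, while long ambiguous ones fall through to the large model; the real system replaces the threshold with a trained router that predicts where the small model is likely to be wrong.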
Problem

Research questions and friction points this paper is trying to address.

Efficient safety guardrails for LLMs
Selective use of large guard models
Balancing computational cost and accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive model selection
Binary router efficiency
Selective large model application