Disentangled Safety Adapters Enable Efficient Guardrails and Flexible Inference-Time Alignment

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI safety methods, such as guardrail models and alignment training, often incur significant trade-offs in inference efficiency or development flexibility. To address this, the paper proposes Disentangled Safety Adapters (DSA), a framework that decouples safety-specific computation from the task-optimized base model, enabling plug-and-play, dynamically adjustable safety enhancement at inference time. DSA achieves this via lightweight adapters that reuse the base model's internal representations, supporting context-aware, fine-grained safety-performance trade-offs. Experiments show strong results: 0.88 AUC (vs. 0.61 for a comparably sized standalone model) on Summedits for hallucination detection; 0.98 (vs. 0.92) on ToxiGen for hate speech classification; 0.93 (vs. 0.90) on AEGIS2.0 and BeaverTails for detecting unsafe model inputs and responses; a 93% improvement in StrongReject safety score while retaining 98% of MTBench performance; and an 8-percentage-point reduction in alignment tax compared to standard safety alignment fine-tuning.

📝 Abstract
Existing paradigms for ensuring AI safety, such as guardrail models and alignment training, often compromise either inference efficiency or development flexibility. We introduce Disentangled Safety Adapters (DSA), a novel framework addressing these challenges by decoupling safety-specific computations from a task-optimized base model. DSA utilizes lightweight adapters that leverage the base model's internal representations, enabling diverse and flexible safety functionalities with minimal impact on inference cost. Empirically, DSA-based safety guardrails substantially outperform comparably sized standalone models, notably improving hallucination detection (0.88 vs. 0.61 AUC on Summedits) and also excelling at classifying hate speech (0.98 vs. 0.92 on ToxiGen) and unsafe model inputs and responses (0.93 vs. 0.90 on AEGIS2.0&BeaverTails). Furthermore, DSA-based safety alignment allows dynamic, inference-time adjustment of alignment strength and a fine-grained trade-off between instruction following performance and model safety. Importantly, combining the DSA safety guardrail with DSA safety alignment facilitates context-dependent alignment strength, boosting safety on StrongReject by 93% while maintaining 98% performance on MTBench -- a total reduction in alignment tax of 8 percentage points compared to standard safety alignment fine-tuning. Overall, DSA presents a promising path towards more modular, efficient, and adaptable AI safety and alignment.
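The abstract's core mechanism is a lightweight adapter that reads the base model's internal representations instead of running a separate safety model. A minimal sketch of that idea (illustrative only, not the paper's code; all sizes and weights here are hypothetical) is a small MLP head that maps a frozen hidden state to a safety score, so the expensive base forward pass is computed once and shared:

```python
import numpy as np

rng = np.random.default_rng(0)

def safety_adapter(hidden_state, w1, b1, w2, b2):
    """Tiny MLP guardrail head on a frozen base-model hidden state.

    hidden_state: (d,) last-layer representation reused from the base model.
    Returns a probability-like safety score in (0, 1).
    """
    h = np.maximum(hidden_state @ w1 + b1, 0.0)  # ReLU hidden layer
    logit = h @ w2 + b2                          # scalar logit
    return 1.0 / (1.0 + np.exp(-logit))          # sigmoid

# Illustrative dimensions and randomly initialized adapter weights.
d, k = 16, 8
w1 = rng.normal(size=(d, k)) * 0.1
b1 = np.zeros(k)
w2 = rng.normal(size=k) * 0.1
b2 = 0.0

# Stands in for an activation already computed by the base model.
hidden = rng.normal(size=d)
score = safety_adapter(hidden, w1, b1, w2, b2)
print(score)
```

Because the adapter adds only a few small matrix multiplies on top of representations the base model produces anyway, the marginal inference cost is minimal, which is the efficiency argument the abstract makes against standalone guardrail models.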
Problem

Research questions and friction points this paper is trying to address.

Balancing AI safety and inference efficiency
Decoupling safety computations from base models
Enabling dynamic adjustment of alignment strength
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupling safety computations from base model
Lightweight adapters for minimal inference cost
Dynamic adjustment of alignment strength
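The dynamic-adjustment idea above can be pictured as follows (an illustrative sketch under assumed details, not the authors' implementation): blend the base model's output with an adapter's safety correction, scaled by a strength coefficient that can be set per request, for example from a guardrail score on the incoming prompt:

```python
import numpy as np

def aligned_logits(base_logits, adapter_delta, alpha):
    """Interpolate alignment strength at inference time.

    alpha = 0 recovers the task-optimized base model;
    alpha = 1 applies the full safety-adapter correction.
    """
    return base_logits + alpha * adapter_delta

# Hypothetical next-token logits and a hypothetical adapter correction.
base = np.array([2.0, 0.5, -1.0])
delta = np.array([-3.0, 0.0, 1.5])

print(aligned_logits(base, delta, 0.0))  # base behavior, zero alignment tax
print(aligned_logits(base, delta, 1.0))  # full safety correction applied
```

Making alpha depend on a guardrail verdict gives the context-dependent alignment strength the abstract describes: benign requests keep the base model's full instruction-following ability, while flagged requests receive the full correction.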