EASE: Practical and Efficient Safety Alignment for Small Language Models

📅 2025-11-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Small language models (SLMs) deployed on edge devices face three key challenges: difficulty in achieving secure alignment under resource constraints, vulnerability of shallow refusal mechanisms to jailbreak attacks, and prohibitively high computational overhead when applying deep safety inference continuously. Method: We propose a selective safety inference framework that dynamically activates deep safety reasoning only upon detecting high-risk jailbreak queries—enabled by a lightweight query risk classifier and adaptive trigger mechanism—and employs knowledge distillation from an optimal safety teacher model to generate fine-grained, low-overhead safety responses. Contribution/Results: This is the first approach to achieve practical, deep safety alignment for SLMs in resource-constrained edge environments. It reduces jailbreak success rates by up to 17% and cuts inference overhead by 90% compared to always-on deep safety inference, thereby significantly improving the joint optimization of security and efficiency.

Technology Category

Application Category

📝 Abstract
Small language models (SLMs) are increasingly deployed on edge devices, making their safety alignment crucial yet challenging. Current shallow alignment methods that rely on direct refusal of malicious queries fail to provide robust protection, particularly against adversarial jailbreaks. While deliberative safety reasoning alignment offers deeper alignment for defending against sophisticated attacks, effectively implanting such reasoning capability in SLMs with limited capabilities remains an open challenge. Moreover, safety reasoning incurs significant computational overhead as models apply reasoning to nearly all queries, making it impractical for resource-constrained edge deployment scenarios that demand rapid responses. We propose EASE, a novel framework that enables practical and Efficient safety Alignment for Small languagE models. Our approach first identifies the optimal safety reasoning teacher that can effectively distill safety reasoning capabilities to SLMs. We then align models to selectively activate safety reasoning for dangerous adversarial jailbreak queries while providing direct responses to straightforward malicious queries and general helpful tasks. This selective mechanism enables small models to maintain robust safety guarantees against sophisticated attacks while preserving computational efficiency for benign interactions. Experimental results demonstrate that EASE reduces jailbreak attack success rates by up to 17% compared to shallow alignment methods while reducing inference overhead by up to 90% compared to deliberative safety reasoning alignment, making it practical for SLMs real-world edge deployments.
Problem

Research questions and friction points this paper is trying to address.

Small language models lack robust safety against adversarial attacks
Current safety methods incur high computational overhead on edge devices
Existing approaches fail to balance safety and efficiency effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selectively activates safety reasoning for dangerous queries
Distills safety reasoning capabilities from optimal teacher models
Reduces computational overhead while maintaining robust protection
🔎 Similar Papers
No similar papers found.