🤖 AI Summary
Speech Language Models (SLMs) exhibit severe vulnerability to covert audio jailbreak attacks, which achieve up to 100% success rates in some scenarios, exposing substantially greater risk than in their text-based counterparts. To address this, we propose a training-free, inference-time activation-intervention patching method: the first post-hoc activation repair mechanism tailored to SLMs, combining activation-space perturbation analysis with gradient-guided defense localization. We further introduce the first large-scale safety evaluation benchmark dedicated to SLMs and establish a comprehensive adversarial robustness assessment framework for the speech modality. Experiments show that our approach reduces jailbreak success rates from 100% to ≤1%, incurs negligible degradation on the original tasks (<0.3%), and generalizes well across diverse SLM architectures and speech datasets.
📝 Abstract
Speech Language Models (SLMs) enable natural interaction via spoken instructions, which capture user intent more effectively by detecting nuances in speech. This richer speech signal, however, introduces new security risks compared to text-based models, as adversaries can better bypass safety mechanisms by injecting imperceptible noise into speech. We analyze adversarial attacks and find that SLMs are substantially more vulnerable to jailbreak attacks, which achieve a perfect 100% attack success rate in some instances. To improve security, we propose post-hoc patching defenses that intervene during inference by modifying the SLM's activations, improving robustness by up to 99% with (i) negligible impact on utility and (ii) no re-training. We conduct ablation studies to maximize the efficacy of our defenses and improve the utility/security trade-off, validated on large-scale benchmarks unique to SLMs.
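The abstract does not spell out the patching procedure, but the general idea of intervening on activations at inference time can be sketched as a PyTorch forward hook. Everything in the sketch below is an assumption: the unit `direction` stands in for whatever the paper's activation-space perturbation analysis identifies, `alpha` and `layer_idx` are stand-ins for its gradient-guided localization step, and `slm.decoder.layers` is a hypothetical layer list on the host model.

```python
import torch

def estimate_direction(adv_acts: torch.Tensor,
                       benign_acts: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for the paper's perturbation analysis:
    approximate a 'jailbreak' direction as the normalized mean
    difference between activations on adversarial vs. benign audio."""
    d = adv_acts.mean(dim=0) - benign_acts.mean(dim=0)
    return d / d.norm()

def make_patch_hook(direction: torch.Tensor, alpha: float = 1.0):
    """Forward hook that projects out the component of each hidden
    state lying along `direction`, scaled by `alpha`. The model's
    weights are never modified, so the defense is training-free."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        # Per-token projection onto the unit direction, then removal.
        proj = (hidden @ direction).unsqueeze(-1) * direction
        patched = hidden - alpha * proj
        if isinstance(output, tuple):
            return (patched,) + output[1:]
        return patched
    return hook

# Usage sketch: attach the hook to one layer selected offline
# (the paper uses gradient-guided localization; `layer_idx` here
# is a placeholder chosen by hand).
# layer_idx = 14
# handle = slm.decoder.layers[layer_idx].register_forward_hook(
#     make_patch_hook(direction, alpha=1.0))
# ... run inference as usual; remove the hook to restore behavior ...
# handle.remove()
```

A forward hook keeps the intervention post-hoc in the sense the abstract describes: it rewrites a single layer's output on the fly during inference, leaving the SLM's parameters and training pipeline untouched.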