🤖 AI Summary
Audio-language models (ALMs) are vulnerable to novel jailbreak attacks, and existing defenses struggle to balance security and functionality. To address this, we propose ALMGuard, the first lightweight, ALM-specific defense framework. We identify and leverage a previously unrecognized safety-alignment shortcut mechanism prevalent across ALMs and design Shortcut Activation Perturbations (SAPs) to deliberately trigger it. Coupled with the Mel-Gradient Sparse Mask (M-GSM), ALMGuard locates sparse frequency bands in the Mel-spectrogram that are attack-sensitive yet semantically non-critical and applies minimal perturbations there to activate robust safety responses. Extensive experiments on four state-of-the-art ALMs show that ALMGuard reduces the average success rate of advanced jailbreak attacks to 4.6% while causing negligible degradation (<0.5% relative) in benign task performance, significantly outperforming existing defenses.
📝 Abstract
Recent advances in Audio-Language Models (ALMs) have significantly improved multimodal understanding capabilities. However, the introduction of the audio modality also brings new and unique vulnerability vectors. Previous studies have proposed jailbreak attacks that specifically target ALMs, revealing that defenses transferred directly from traditional audio adversarial attacks or text-based Large Language Model (LLM) jailbreaks are largely ineffective against these ALM-specific threats. To address this issue, we propose ALMGuard, the first defense framework tailored to ALMs. Based on the assumption that safety-aligned shortcuts naturally exist in ALMs, we design a method to identify universal Shortcut Activation Perturbations (SAPs) that serve as triggers activating these safety shortcuts to safeguard ALMs at inference time. To select effective triggers while preserving the model's utility on benign tasks, we further propose the Mel-Gradient Sparse Mask (M-GSM), which restricts perturbations to Mel-frequency bins that are sensitive to jailbreaks but insensitive to speech understanding. Both theoretical analyses and empirical results demonstrate the robustness of our method against both seen and unseen attacks. Overall, ALMGuard reduces the average success rate of advanced ALM-specific jailbreak attacks to 4.6% across four models while maintaining comparable utility on benign benchmarks, establishing it as the new state of the art. Our code and data are available at https://github.com/WeifeiJin/ALMGuard.
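The M-GSM idea described above can be made concrete with a rough sketch: rank Mel-frequency bins by how sensitive they are to a jailbreak loss relative to a benign speech-understanding loss, keep only a sparse set of bins, and confine the universal SAP to those bins. The sketch below is purely illustrative; the function names, the precomputed gradient inputs, and parameters such as `sparsity` and `budget` are assumptions for exposition, not the released implementation (see the repository linked above).

```python
import numpy as np

def mel_gradient_sparse_mask(jb_grad, util_grad, sparsity=0.05, eps=1e-8):
    """Build a sparse binary mask over Mel-frequency bins (illustrative only).

    jb_grad, util_grad: arrays of shape (n_mels,) holding the average absolute
    gradient of a jailbreak-success loss and of a benign speech-understanding
    loss w.r.t. each Mel bin, assumed precomputed by backpropagating through
    the ALM's audio front end. Bins that are sensitive to jailbreaks but
    insensitive to speech understanding are kept; `sparsity` caps the
    fraction of bins selected.
    """
    score = jb_grad / (util_grad + eps)      # high = attack-sensitive, utility-insensitive
    k = max(1, int(sparsity * len(score)))
    keep = np.argsort(score)[-k:]            # indices of the top-k bins by score
    mask = np.zeros_like(score)
    mask[keep] = 1.0
    return mask

def apply_sap(mel_spec, sap, mask, budget=0.5):
    """Add a universal Shortcut Activation Perturbation (SAP) to a
    Mel-spectrogram, restricted to masked bins and clipped to a budget."""
    delta = np.clip(sap, -budget, budget) * mask[:, None]   # (n_mels, n_frames)
    return mel_spec + delta

# Toy usage with random stand-ins for the precomputed quantities.
rng = np.random.default_rng(0)
n_mels, n_frames = 80, 200
jb_grad = rng.random(n_mels)
util_grad = rng.random(n_mels)
mask = mel_gradient_sparse_mask(jb_grad, util_grad, sparsity=0.05)
sap = rng.normal(scale=0.1, size=(n_mels, 1))   # one perturbation value per bin
protected = apply_sap(rng.normal(size=(n_mels, n_frames)), sap, mask)
```

Confining the perturbation to a small, gradient-selected subset of bins is what lets the trigger reliably activate the safety shortcut while leaving the bins that carry most of the speech content untouched, which is the stated reason benign-task utility is preserved.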