🤖 AI Summary
This study introduces the first practical speech jailbreaking attack against end-to-end large audio-language models (LALMs), addressing two key challenges: the failure of conventional text-based jailbreaks when transferred to the speech modality, and the adversary's lack of full control over user-provided prompts. Methodologically, the authors propose a unified framework integrating four critical dimensions (*asynchrony*, *universality*, *stealthiness*, and *over-the-air robustness*), achieved via suffix-based audio perturbation design, multi-prompt joint optimization, intent-concealment strategies, and room impulse response modeling for end-to-end adversarial audio generation. Extensive evaluation across the largest set of LALMs tested to date demonstrates high attack success rates; crucially, the perturbations remain effective after playback through real-world speakers. To foster reproducibility and further research, all code and adversarial audio samples are released.
📝 Abstract
Jailbreak attacks on large audio-language models (LALMs) have been studied only recently, and existing attacks achieve suboptimal effectiveness, applicability, and practicability, in particular by assuming that the adversary can fully manipulate user prompts. In this work, we first conduct extensive experiments showing that advanced text jailbreak attacks cannot be easily ported to end-to-end LALMs via text-to-speech (TTS) techniques. We then propose AudioJailbreak, a novel audio jailbreak attack featuring (1) asynchrony: the jailbreak audio need not align with user prompts on the time axis, since we craft suffixal jailbreak audios; (2) universality: a single jailbreak perturbation is effective across different prompts, achieved by incorporating multiple prompts into perturbation generation; (3) stealthiness: the malicious intent of jailbreak audios does not raise victims' awareness, thanks to various intent-concealment strategies; and (4) over-the-air robustness: the jailbreak audios remain effective when played over the air, achieved by incorporating the reverberation distortion effect, modeled with room impulse responses, into perturbation generation. In contrast, no prior audio jailbreak attack offers asynchrony, universality, stealthiness, or over-the-air robustness. Moreover, AudioJailbreak also applies to adversaries who cannot fully manipulate user prompts, and thus covers a much broader attack scenario. Extensive experiments with the most LALMs evaluated to date demonstrate the high effectiveness of AudioJailbreak. We highlight that our work sheds light on the security implications of audio jailbreak attacks against LALMs and realistically fosters improvements to their security robustness. The implementation and audio samples are available at our website https://audiojailbreak.github.io/AudioJailbreak.
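The over-the-air robustness mentioned in the abstract is typically obtained by simulating reverberation during perturbation optimization, i.e., convolving the adversarial waveform with a room impulse response (RIR). Below is a minimal NumPy sketch of that simulation step only; it uses a synthetic exponentially decaying RIR rather than the measured ones, and `apply_rir` and all parameters are illustrative, not the paper's implementation:

```python
import numpy as np

def apply_rir(audio: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Simulate over-the-air playback by convolving a waveform
    with a room impulse response, then peak-normalizing."""
    out = np.convolve(audio, rir)[: len(audio)]  # keep original length
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Toy example: 1 s of noise at 16 kHz and a synthetic decaying RIR.
rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)
rir = np.exp(-np.linspace(0.0, 8.0, 4000)) * rng.standard_normal(4000)
simulated = apply_rir(audio, rir)
```

In an attack pipeline of this kind, the convolution would be applied (with RIRs sampled from a set of rooms) inside the optimization loop, so the perturbation that is finally played back already accounts for reverberation distortion.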