🤖 AI Summary
This work reveals a significantly heightened audio jailbreaking risk for Large Audio-Language Models (LALMs) in multilingual and multi-accent settings. To address this, we propose Multi-AudioJail, the first systematic multilingual audio jailbreaking framework, which integrates adversarial multilingual/multi-accent speech prompt generation, acoustic perturbation modeling (reverberation, whispering, echo), cross-linguistic phonological analysis, and a hierarchical evaluation pipeline. We identify and quantify, for the first time, how accent variation and acoustic perturbations synergistically amplify attack success rates, demonstrating that LALMs are systemically vulnerable to non-English speech modalities. Experiments show jailbreak success rates rising by up to 57.25 percentage points, with multilingual audio attacks achieving 3.1× the success of text-based counterparts. We also release the first open multilingual, multi-accent adversarial audio jailbreaking dataset.
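To make the acoustic perturbation step concrete, below is a minimal sketch of how echo and reverberation effects could be applied to a spoken jailbreak prompt before it is fed to an LALM. This is an illustrative assumption, not the paper's actual implementation: the 16 kHz sample rate, function names, delay/decay parameters, and the synthetic noise impulse response are all placeholders, and the whisper effect is omitted because it requires voicing removal (e.g., a vocoder) rather than simple signal arithmetic.

```python
import numpy as np

SR = 16_000  # assumed sample rate in Hz (not specified in the summary)

def add_echo(wave: np.ndarray, delay_s: float = 0.25, decay: float = 0.5) -> np.ndarray:
    """Overlay a single delayed, attenuated copy of the signal onto itself."""
    d = int(delay_s * SR)
    out = wave.copy()
    out[d:] += decay * wave[:-d]          # delayed copy mixed back in
    return out / max(1.0, np.max(np.abs(out)))  # peak-normalize to avoid clipping

def add_reverb(wave: np.ndarray, rt60_s: float = 0.4) -> np.ndarray:
    """Convolve with a synthetic, exponentially decaying noise impulse response."""
    n = int(rt60_s * SR)
    t = np.arange(n) / SR
    ir = np.random.randn(n) * np.exp(-6.9 * t / rt60_s)  # ~60 dB amplitude decay over rt60_s
    out = np.convolve(wave, ir)[: len(wave)]
    return out / max(1.0, np.max(np.abs(out)))

# Hypothetical usage: stack perturbations on an accented speech prompt.
# perturbed = add_reverb(add_echo(prompt_wave))
```

Real evaluations would more likely use recorded room impulse responses than the synthetic one sketched here; the point is only that these perturbations are cheap, waveform-level transforms.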
📝 Abstract
Large Audio-Language Models (LALMs) have significantly advanced audio understanding but introduce critical security risks, particularly through audio jailbreaks. While prior work has focused on English-centric attacks, we expose a far more severe vulnerability: adversarial multilingual and multi-accent audio jailbreaks, in which linguistic and acoustic variations dramatically amplify attack success. In this paper, we introduce Multi-AudioJail, the first systematic framework to exploit these vulnerabilities, comprising (1) a novel dataset of adversarially perturbed multilingual/multi-accent audio jailbreaking prompts, and (2) a hierarchical evaluation pipeline revealing how acoustic perturbations (e.g., reverberation, echo, and whisper effects) interact with cross-lingual phonetics to push jailbreak success rates (JSRs) up by as much as +57.25 percentage points (e.g., a reverberated Kenyan-accented attack on MERaLiON). Crucially, our work further reveals that multimodal LLMs are inherently more vulnerable than unimodal systems: attackers need only exploit the weakest link (e.g., non-English audio inputs) to compromise the entire model, which we demonstrate empirically with multilingual audio-only attacks achieving 3.1× higher success rates than text-only attacks. We plan to release our dataset to spur research into cross-modal defenses, and we urge the community to address this expanding multimodal attack surface as LALMs evolve.
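For readers parsing the reported numbers: a jailbreak success rate (JSR) is simply the fraction of attack attempts judged successful, and "+57.25 percentage points" is an absolute difference between two such rates, not a relative ratio like the 3.1× figure. The sketch below illustrates the arithmetic; the outcome counts are made up for illustration and are not the paper's data.

```python
def jsr(outcomes: list[bool]) -> float:
    """Jailbreak success rate (%) over a set of attack attempts."""
    return 100.0 * sum(outcomes) / len(outcomes)

# Hypothetical outcome tallies, for illustration only:
baseline  = jsr([False] * 80 + [True] * 20)   # 20.0% on clean audio
perturbed = jsr([False] * 30 + [True] * 70)   # 70.0% on perturbed, accented audio

print(f"surge: +{perturbed - baseline:.2f} percentage points")  # -> +50.00
print(f"relative: {perturbed / baseline:.1f}x")                 # -> 3.5x
```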