On Optimizing Multimodal Jailbreaks for Spoken Language Models

πŸ“… 2026-03-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing jailbreak attacks on spoken language models (SLMs) are largely confined to single modalities, failing to expose security vulnerabilities under joint audio-text inputs. This work proposes JAMA, the first framework enabling coordinated jailbreak attacks across both the textual and audio modalities. JAMA employs Greedy Coordinate Gradient (GCG) for textual optimization and Projected Gradient Descent (PGD) for audio perturbation, augmented with joint and accelerated sequential approximation strategies. Experiments across four mainstream SLMs and four audio categories demonstrate that JAMA improves jailbreak success rates by 1.5–10× and accelerates attack execution by 4–6× compared to single-modality baselines. These results underscore that relying solely on unimodal evaluations is insufficient for ensuring the robustness and safety of SLMs.

πŸ“ Abstract
As Spoken Language Models (SLMs) integrate speech and text modalities, they inherit the safety vulnerabilities of their LLM backbone and face an expanded attack surface. SLMs have previously been shown to be susceptible to jailbreaking, where adversarial prompts induce harmful responses. Yet existing attacks remain largely unimodal, optimizing either text or audio in isolation. We explore gradient-based multimodal jailbreaks by introducing JAMA (Joint Audio-text Multimodal Attack), a joint multimodal optimization framework combining Greedy Coordinate Gradient (GCG) for text and Projected Gradient Descent (PGD) for audio to perturb both modalities simultaneously. Evaluations across four state-of-the-art SLMs and four audio types demonstrate that JAMA surpasses unimodal jailbreak success rates by 1.5x to 10x. We analyze the operational dynamics of this joint attack and show that a sequential approximation method makes it 4x to 6x faster. Our findings suggest that unimodal safety is insufficient for robust SLMs. The code and data are available at https://repos.lsv.uni-saarland.de/akrishnan/multimodal-jailbreak-slm
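The alternating scheme the abstract describes, a GCG-style greedy coordinate step over discrete text tokens interleaved with a PGD step on a continuous audio perturbation, can be sketched on a toy differentiable loss. Everything below (the quadratic surrogate loss, the vocabulary size, the embeddings `E`, the step sizes) is a hypothetical stand-in for illustration, not the paper's implementation, which backpropagates through an actual SLM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy surrogate for the SLM's adversarial loss (lower =
# closer to the target harmful continuation). A convex quadratic
# surface keeps this sketch self-contained and runnable.
VOCAB = 8        # toy token vocabulary size
SUFFIX_LEN = 4   # adversarial text-suffix length
AUDIO_DIM = 16   # waveform-perturbation dimensionality
EPS = 0.1        # L_inf budget on the audio perturbation

A = rng.normal(size=(AUDIO_DIM, AUDIO_DIM))
E = rng.normal(size=(VOCAB, AUDIO_DIM))   # toy token "embeddings"
target = rng.normal(size=AUDIO_DIM)

def loss(tokens, delta):
    # Text and audio are coupled through a shared feature vector, so
    # neither unimodal attack alone can fully minimize the loss.
    feats = A @ delta + E[tokens].sum(axis=0)
    return float(np.sum((feats - target) ** 2))

def pgd_step(tokens, delta, lr=0.005):
    # PGD: gradient step on the audio perturbation, then projection
    # back onto the L_inf ball of radius EPS.
    feats = A @ delta + E[tokens].sum(axis=0)
    grad = 2 * A.T @ (feats - target)
    return np.clip(delta - lr * grad, -EPS, EPS)

def gcg_step(tokens, delta):
    # Greedy coordinate step over discrete tokens. Real GCG ranks
    # candidate substitutions with embedding gradients; this toy
    # vocabulary is small enough to evaluate every swap exactly.
    tokens = list(tokens)
    for i in range(SUFFIX_LEN):
        tokens[i] = min(range(VOCAB),
                        key=lambda v: loss(tokens[:i] + [v] + tokens[i + 1:], delta))
    return tokens

# Sequential approximation: alternate the two unimodal updates instead
# of solving a joint step at every iteration -- the scheduling the
# paper reports as 4x-6x faster.
tokens, delta = [0] * SUFFIX_LEN, np.zeros(AUDIO_DIM)
initial = loss(tokens, delta)
for _ in range(100):
    delta = pgd_step(tokens, delta)
    tokens = gcg_step(tokens, delta)

print(loss(tokens, delta) < initial)  # coordinated attack lowers the loss
```

Since the greedy coordinate step never increases the loss and the projected gradient step decreases it for a small enough learning rate on a convex surrogate, the alternating loop monotonically improves the joint objective in this sketch; on a real SLM the loss is nonconvex and the same schedule is a heuristic.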
Problem

Research questions and friction points this paper is trying to address.

multimodal jailbreak
spoken language models
adversarial attack
audio-text vulnerability
safety evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal jailbreak
spoken language models
joint optimization
adversarial attack
gradient-based attack
πŸ”Ž Similar Papers
No similar papers found.