When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Audio-language models exhibit a novel security vulnerability: ostensibly benign speech inputs can be maliciously manipulated to bypass safety mechanisms and generate harmful outputs. This paper introduces WhisperInject, the first two-stage adversarial attack framework tailored to the audio modality, integrating reward-guided optimization with payload injection. It first employs Reinforcement Learning with Projected Gradient Descent (RL-PGD) to elicit a harmful native response from the target model, then injects that payload into benign audio carriers as imperceptible perturbations that preserve auditory fidelity. WhisperInject overcomes theoretical and practical limitations of prior audio adversarial attacks, achieving high stealth and strong cross-model transferability. It attains attack success rates exceeding 86% on state-of-the-art multimodal models, including Qwen2.5-Omni and Phi-4-Multimodal, validated rigorously via StrongREJECT, LlamaGuard, and human evaluation. To the authors' knowledge, this is the first empirically demonstrated, practically feasible AI safety threat targeting real-world audio interfaces.

📝 Abstract
As large language models become increasingly integrated into daily life, audio has emerged as a key interface for human-AI interaction. However, this convenience also introduces new vulnerabilities, making audio a potential attack surface for adversaries. Our research introduces WhisperInject, a two-stage adversarial audio attack framework that can manipulate state-of-the-art audio language models to generate harmful content. Our method uses imperceptible perturbations in audio inputs that remain benign to human listeners. The first stage uses a novel reward-based optimization method, Reinforcement Learning with Projected Gradient Descent (RL-PGD), to guide the target model to circumvent its own safety protocols and generate harmful native responses. This native harmful response then serves as the target for Stage 2, Payload Injection, where we use Projected Gradient Descent (PGD) to optimize subtle perturbations that are embedded into benign audio carriers, such as weather queries or greeting messages. Validated under rigorous safety evaluation frameworks, including StrongREJECT, LlamaGuard, and human evaluation, our experiments demonstrate a success rate exceeding 86% across Qwen2.5-Omni-3B, Qwen2.5-Omni-7B, and Phi-4-Multimodal. Our work demonstrates a new class of practical, audio-native threats, moving beyond theoretical exploits to reveal a feasible and covert method for manipulating AI behavior.
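The Stage 2 Payload Injection described above is, at its core, projected gradient descent over an additive waveform perturbation. A minimal sketch, assuming a 1-D waveform in [-1, 1], an L-infinity imperceptibility budget, and a caller-supplied gradient oracle standing in for the target model (all names and the toy loss are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def pgd_audio(carrier, loss_grad, eps=0.01, alpha=0.002, steps=100):
    """PGD over an additive perturbation of a benign audio carrier.

    carrier   -- benign waveform, 1-D float array with values in [-1, 1]
    loss_grad -- callable returning d(loss)/d(audio) for a candidate waveform
    eps       -- L-inf budget keeping the perturbation imperceptible
    """
    delta = np.zeros_like(carrier)
    for _ in range(steps):
        g = loss_grad(carrier + delta)
        delta -= alpha * np.sign(g)           # signed step that lowers the loss
        delta = np.clip(delta, -eps, eps)     # project back into the L-inf ball
        # keep the perturbed waveform a valid audio signal
        delta = np.clip(carrier + delta, -1.0, 1.0) - carrier
    return carrier + delta

# Toy stand-in for the model loss: pull the waveform toward a fixed target,
# mimicking "make the model produce the Stage 1 payload".
target = np.linspace(-0.5, 0.5, 16000)
carrier = 0.8 * np.sin(np.linspace(0, 100, 16000))
grad = lambda x: 2 * (x - target)             # gradient of ||x - target||^2

adv = pgd_audio(carrier, grad, eps=0.01)
```

In the real attack the gradient would come from backpropagating a loss on the target model's output through its audio front end; here the analytic toy gradient keeps the sketch self-contained.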
Problem

Research questions and friction points this paper is trying to address.

Exploiting audio inputs to bypass AI safety protocols
Generating harmful content via adversarial audio perturbations
Manipulating audio-language models with imperceptible malicious inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage adversarial audio attack framework
Reward-based optimization with RL-PGD
Payload Injection using PGD optimization
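The reward-based Stage 1 search can be pictured as a reward-guided ascent constrained to stay near the starting input. The evolution-strategies-style gradient estimate, the toy reward, and every parameter name below are assumptions made for illustration; the paper's actual RL-PGD procedure may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_guided_step(x, reward_fn, x0, sigma=0.1, pop=32, alpha=0.01, eps=0.2):
    """One reward-guided step: probe random perturbations, estimate the
    reward gradient from their scores (evolution-strategies style), then
    take a projected signed ascent step staying within eps of x0."""
    noise = rng.standard_normal((pop, x.size))
    rewards = np.array([reward_fn(x + sigma * n) for n in noise])
    advantages = rewards - rewards.mean()           # baseline-subtracted scores
    g = (advantages[:, None] * noise).mean(axis=0) / sigma
    x = x + alpha * np.sign(g)                      # ascend the estimated reward
    return x0 + np.clip(x - x0, -eps, eps)          # project back toward x0

# Toy black-box reward peaking at a fixed "target response" waveform,
# standing in for a judge that scores how close the model's output is
# to a harmful native response.
target = np.full(64, 0.5)
reward = lambda w: -np.sum((w - target) ** 2)

x = np.zeros(64)
for _ in range(100):
    x = reward_guided_step(x, reward, x0=np.zeros(64))
```

The key property this loop shares with the described method is that only a scalar reward signal is needed to steer the input, not a differentiable path to a fixed target string.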