🤖 AI Summary
This work addresses a novel audio injection attack against speech-driven large language models (LLMs), wherein adversaries exploit inaudible signals to perform stealthy prompt injection. The authors propose SWhisper, a framework that, for the first time, establishes a near-ultrasonic covert channel in realistic black-box settings using off-the-shelf hardware. By modeling microphone nonlinearities and applying lightweight channel-inversion pre-compensation, SWhisper enables reliable transmission of high-fidelity, long, structured prompts. The framework further incorporates a speech-aware jailbreak prompt generation strategy. Experimental results demonstrate that the method achieves a 0.94 non-refusal rate and a 0.925 specific-convincing score on mainstream speech-driven LLMs, while remaining completely imperceptible to human listeners.
📝 Abstract
Speech-driven large language models (LLMs) are increasingly accessed through speech interfaces, introducing new security risks via open acoustic channels. We present Sirens' Whisper (SWhisper), the first practical framework for covert prompt-based attacks against speech-driven LLMs under realistic black-box conditions using commodity hardware. SWhisper enables robust, inaudible delivery of arbitrary target baseband audio, including long and structured prompts, on commodity devices by encoding it into near-ultrasound waveforms that demodulate faithfully after acoustic transmission and microphone nonlinearity. This is achieved through a simple yet effective approach to modeling nonlinear channel characteristics across devices and environments, combined with lightweight channel-inversion pre-compensation. Building on this high-fidelity covert channel, we design a voice-aware jailbreak generation method that ensures intelligibility, brevity, and transferability under speech-driven interfaces. Experiments across both commercial and open-source speech-driven LLMs demonstrate strong black-box effectiveness. On commercial models, SWhisper achieves up to 0.94 non-refusal (NR) and 0.925 specific-convincing (SC). A controlled user study further shows that the injected jailbreak audio is perceptually indistinguishable from background-only playback for human listeners. Although jailbreaks serve as a case study, the underlying covert acoustic channel enables a broader class of high-fidelity prompt-injection and command-execution attacks.
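The physical mechanism the abstract relies on (a baseband signal amplitude-modulated onto a near-ultrasound carrier, which the microphone's nonlinearity self-demodulates back to baseband) can be illustrated with a toy simulation. This is a minimal sketch of the general nonlinearity-demodulation effect, not the paper's method: the sample rate, carrier frequency, nonlinearity coefficients, and the use of a 400 Hz tone as a stand-in for prompt audio are all illustrative assumptions.

```python
import numpy as np

FS = 96_000          # sample rate (Hz); assumed, high enough for the carrier
F_CARRIER = 20_000   # hypothetical near-ultrasound carrier frequency (Hz)
DURATION = 0.05      # seconds of simulated audio

t = np.arange(int(FS * DURATION)) / FS
baseband = 0.5 * np.sin(2 * np.pi * 400 * t)  # toy stand-in for prompt audio

# Amplitude-modulate the baseband onto the (inaudible) carrier.
transmitted = (1.0 + baseband) * np.cos(2 * np.pi * F_CARRIER * t)

# Model the microphone as mildly nonlinear: y = a1*x + a2*x^2.
# The quadratic term squares the envelope, so a copy of the baseband
# reappears at audible frequencies (coefficients are illustrative).
received = 1.0 * transmitted + 0.05 * transmitted**2

# Crude low-pass filter (moving average) to strip the carrier and its
# harmonics; its length nulls components at multiples of 4 kHz,
# including the 20 kHz carrier.
n = FS // 4000
recovered = np.convolve(received - received.mean(),
                        np.ones(n) / n, mode="same")

# The demodulated signal should correlate strongly with the original tone.
corr = np.corrcoef(recovered, baseband)[0, 1]
```

In this idealized model the quadratic term alone recovers the envelope; the paper's contribution, per the abstract, is inverting the real (device- and environment-dependent) nonlinear channel via pre-compensation so that the demodulated output is faithful enough to carry long, structured prompts.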