Backdoor Attacks Against Speech Language Models

📅 2025-10-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work presents the first systematic study of audio backdoor attacks against speech language models (SLMs), focusing on cascaded architectures that pair speech encoders with large language models (LLMs). To address the unclear propagation mechanisms of backdoors across modular components and the lack of effective defenses, we propose a component-level vulnerability analysis framework that identifies speech encoders as the primary attack surface. We further design a lightweight, fine-tuning-based defense that mitigates poisoning risks in pretrained encoders. Extensive end-to-end evaluations span four mainstream speech encoders, three benchmark datasets, and four downstream tasks (automatic speech recognition, speech emotion recognition, and gender and age prediction), with the attack achieving success rates of 90.76% to 99.41%. Our defense significantly reduces backdoor activation rates, empirically validating both the traceability of cross-component backdoor propagation and the efficacy of our mitigation strategy.
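The summary describes two measurable quantities: a trigger stamped into poisoned audio, and the attack success rate (the fraction of triggered inputs that produce the attacker's target output). The sketch below illustrates both with an inaudibly small sine-tone trigger; the trigger design, frequency, and amplitude here are illustrative assumptions, not the paper's actual trigger.

```python
import numpy as np

def inject_trigger(waveform, sample_rate=16000, freq=40.0, amplitude=0.01):
    """Overlay a low-amplitude sine tone on the waveform as a hypothetical
    audio backdoor trigger (illustrative; not the paper's trigger design)."""
    t = np.arange(len(waveform)) / sample_rate
    return waveform + amplitude * np.sin(2.0 * np.pi * freq * t)

def attack_success_rate(predictions, target_label):
    """Fraction of triggered inputs whose prediction matches the attacker's
    chosen target label."""
    return float(np.mean(np.asarray(predictions) == target_label))

# Toy usage: poison one second of silence, then score mock SLM outputs.
clean = np.zeros(16000)                          # 1 s of silence at 16 kHz
poisoned = inject_trigger(clean)                 # trigger is now embedded
preds = ["angry", "angry", "neutral", "angry"]   # mock outputs on triggered audio
asr = attack_success_rate(preds, "angry")        # 3 of 4 flipped -> 0.75
```

A real evaluation would run the triggered audio through the full encoder-plus-LLM pipeline; this toy version only fixes the bookkeeping.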

📝 Abstract
Large Language Models (LLMs) and their multimodal extensions are becoming increasingly popular. One common approach to enable multimodality is to cascade domain-specific encoders with an LLM, making the resulting model inherit vulnerabilities from all of its components. In this work, we present the first systematic study of audio backdoor attacks against speech language models. We demonstrate the attack's effectiveness across four speech encoders and three datasets, covering four tasks: automatic speech recognition (ASR), speech emotion recognition, and gender and age prediction. The attack consistently achieves high success rates, ranging from 90.76% to 99.41%. To better understand how backdoors propagate, we conduct a component-wise analysis to identify the most vulnerable stages of the pipeline. Finally, we propose a fine-tuning-based defense that mitigates the threat of poisoned pretrained encoders.
Problem

Research questions and friction points this paper is trying to address.

Studying audio backdoor attacks on speech language models
Analyzing vulnerability propagation across multimodal pipeline components
Proposing fine-tuning defense against poisoned speech encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically study audio backdoor attacks on speech language models
Analyze vulnerable pipeline stages for backdoor propagation
Propose fine-tuning defense to mitigate poisoned encoders
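The defense listed above fine-tunes a possibly poisoned pretrained encoder on trusted clean data so that trigger-dependent weights drift away from the backdoor behavior. A minimal sketch of that idea, assuming a toy stand-in encoder and random clean features (the real defense operates on actual pretrained speech encoders):

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical stand-in for a (possibly poisoned) pretrained speech encoder.
encoder = nn.Sequential(nn.Linear(80, 64), nn.ReLU(), nn.Linear(64, 32))

def clean_finetune(encoder, clean_batches, num_classes=4, lr=1e-3):
    """Fine-tune the encoder on trusted clean data with a fresh task head,
    nudging weights away from any trigger-dependent behavior."""
    head = nn.Linear(32, num_classes)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for feats, labels in clean_batches:
        opt.zero_grad()
        loss_fn(head(encoder(feats)), labels).backward()
        opt.step()
    return encoder

# Toy clean batches: random 80-dim acoustic features with class labels.
batches = [(torch.randn(8, 80), torch.randint(0, 4, (8,))) for _ in range(5)]
w_before = encoder[0].weight.detach().clone()  # snapshot to confirm an update
clean_finetune(encoder, batches)
```

The key design point is that the fine-tuning data is clean and trusted; the objective and head are disposable, since only the updated encoder weights are kept.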