A Parameter-Efficient Multi-Scale Convolutional Adapter for Synthetic Speech Detection

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing parameter-efficient fine-tuning (PEFT) methods lack the inductive bias needed to model the multi-scale temporal artifacts inherent in synthetic speech, which limits detection performance. To address this, we propose the Multi-Scale Convolutional Adapter (MultiConvAdapter), which embeds parallel multi-scale convolutional modules within the encoder of a frozen self-supervised learning (SSL) backbone. This design explicitly injects local temporal awareness as a structural prior, enabling joint modeling of both short-term artifacts and long-term distortions. The adapter introduces only 3.17 million trainable parameters (approximately 1% of the full model), significantly reducing computational overhead. Evaluated on five public benchmarks, MultiConvAdapter consistently outperforms full fine-tuning and state-of-the-art PEFT approaches, demonstrating that carefully designed architectural priors are critical for robust deepfake speech detection.

📝 Abstract
Recent synthetic speech detection models typically adapt a pre-trained SSL model via fine-tuning, which is computationally demanding. Parameter-Efficient Fine-Tuning (PEFT) offers an alternative. However, existing methods lack the specific inductive biases required to model the multi-scale temporal artifacts characteristic of spoofed audio. This paper introduces the Multi-Scale Convolutional Adapter (MultiConvAdapter), a parameter-efficient architecture designed to address this limitation. MultiConvAdapter integrates parallel convolutional modules within the SSL encoder, facilitating the simultaneous learning of discriminative features across multiple temporal resolutions, capturing both short-term artifacts and long-term distortions. With only $3.17$M trainable parameters ($1\%$ of the SSL backbone), MultiConvAdapter substantially reduces the computational burden of adaptation. Evaluations on five public datasets demonstrate that MultiConvAdapter achieves superior performance compared to full fine-tuning and established PEFT methods.
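The core mechanism, parallel convolutions at several temporal scales whose fused output is added back to the frozen SSL features through a residual connection, can be sketched in plain Python. The kernel sizes, filter weights, and fusion-by-averaging below are illustrative assumptions for a 1-D toy signal, not the paper's actual configuration:

```python
# Minimal sketch of a multi-scale convolutional adapter (pure Python,
# no ML framework). Kernels and fusion scheme are assumed for illustration.

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution over a list of floats."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(x))]

def multi_scale_adapter(x, kernels):
    """Run parallel convolution branches at different temporal scales,
    average their outputs, and add the result back to the frozen
    features via a residual connection."""
    branches = [conv1d(x, k) for k in kernels]
    fused = [sum(vals) / len(branches) for vals in zip(*branches)]
    return [xi + fi for xi, fi in zip(x, fused)]  # residual add

# Three branches with increasing receptive fields (assumed sizes):
kernels = [
    [1.0],                       # k=1: pointwise, short-term artifacts
    [0.25, 0.5, 0.25],           # k=3: local context
    [0.1, 0.2, 0.4, 0.2, 0.1],   # k=5: longer-range distortions
]

features = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]  # toy frame-level features
out = multi_scale_adapter(features, kernels)
print(len(out) == len(features))  # sequence length is preserved
```

Because each branch uses 'same' padding, every scale produces a sequence of the original length, so the branches can be fused element-wise and added residually without any reshaping.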
Problem

Research questions and friction points this paper is trying to address.

Detecting synthetic speech with multi-scale temporal artifacts efficiently
Reducing computational burden of SSL model adaptation for spoof detection
Capturing both short-term artifacts and long-term distortions in audio
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient multi-scale convolutional adapter architecture
Integrates parallel convolutions for multi-temporal resolution learning
Achieves superior performance with minimal trainable parameters
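The "minimal trainable parameters" point can be checked with back-of-envelope arithmetic. The backbone size below (~317M, typical of a large SSL speech encoder such as a wav2vec 2.0 Large / XLS-R 300M-class model) is an assumption used only to show why 3.17M trainable parameters works out to roughly 1%:

```python
# Back-of-envelope check of the ~1% trainable-parameter claim.
backbone_params = 317_000_000   # frozen SSL encoder size (assumed)
adapter_params = 3_170_000      # trainable, from the abstract (3.17M)

fraction = adapter_params / backbone_params
print(f"{fraction:.1%}")  # → 1.0%
```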