🤖 AI Summary
Existing parameter-efficient fine-tuning (PEFT) methods lack the inductive bias needed to model the multi-scale temporal artifacts inherent in synthetic speech, limiting detection performance. To address this, we propose the Multi-Scale Convolutional Adapter (MultiConvAdapter), which embeds parallel multi-scale convolutional modules within the encoder of a frozen self-supervised learning (SSL) backbone. This design explicitly injects local temporal awareness as a structural prior, enabling joint modeling of both short-term artifacts and long-term distortions. The adapter introduces only 3.17 million trainable parameters—approximately 1% of the full model—significantly reducing computational overhead. Evaluated on five public benchmarks, MultiConvAdapter consistently outperforms full fine-tuning and state-of-the-art PEFT approaches, demonstrating that carefully designed architectural priors are critical for robust deepfake speech detection.
📝 Abstract
Recent synthetic speech detection models typically adapt a pre-trained SSL model via fine-tuning, which is computationally demanding. Parameter-Efficient Fine-Tuning (PEFT) offers an alternative. However, existing methods lack the specific inductive biases required to model the multi-scale temporal artifacts characteristic of spoofed audio. This paper introduces the Multi-Scale Convolutional Adapter (MultiConvAdapter), a parameter-efficient architecture designed to address this limitation. MultiConvAdapter integrates parallel convolutional modules within the SSL encoder, facilitating the simultaneous learning of discriminative features across multiple temporal resolutions, capturing both short-term artifacts and long-term distortions. With only $3.17$M trainable parameters ($1\%$ of the SSL backbone), MultiConvAdapter substantially reduces the computational burden of adaptation. Evaluations on five public datasets demonstrate that MultiConvAdapter achieves superior performance compared to full fine-tuning and established PEFT methods.
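The abstract describes the adapter's structure (parallel convolutions at multiple temporal resolutions, inserted into a frozen SSL encoder) but not its implementation. The sketch below illustrates one plausible reading in plain NumPy: a bottleneck down-projection, parallel depthwise 1D convolutions with different kernel sizes (multi-scale branches), and a residual up-projection. All specifics here — the bottleneck width, the kernel sizes (3, 5, 7), the ReLU, and the class name — are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, w):
    """Depthwise 1D convolution over time with 'same' zero padding.
    x: (T, C) feature sequence; w: (k, C) per-channel kernel."""
    k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        out[t] = np.sum(xp[t:t + k] * w, axis=0)
    return out

class MultiScaleConvAdapter:
    """Hypothetical sketch of a multi-scale convolutional adapter:
    down-project, run parallel convolutions at several kernel sizes
    (short- vs. long-range temporal context), sum the branches,
    up-project, and add a residual connection. Sizes are assumptions."""
    def __init__(self, d_model=768, d_bottleneck=64, kernel_sizes=(3, 5, 7)):
        s = 0.02  # small random init; in practice these are trained
        self.down = rng.normal(0, s, (d_model, d_bottleneck))
        self.convs = [rng.normal(0, s, (k, d_bottleneck)) for k in kernel_sizes]
        self.up = rng.normal(0, s, (d_bottleneck, d_model))

    def __call__(self, h):
        """h: (T, d_model) hidden states from a frozen SSL encoder layer."""
        z = np.maximum(h @ self.down, 0.0)               # bottleneck + ReLU
        z = sum(conv1d_same(z, w) for w in self.convs)   # parallel multi-scale branches
        return h + z @ self.up                           # residual connection

adapter = MultiScaleConvAdapter()
h = rng.normal(size=(50, 768))  # e.g. 50 frames of SSL features
out = adapter(h)
print(out.shape)  # (50, 768)
```

Because only the small `down`/`convs`/`up` matrices would be trained while the backbone stays frozen, the per-layer parameter count stays tiny relative to the SSL model, which is the premise of the PEFT approach the abstract describes.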