DFALLM: Achieving Generalizable Multitask Deepfake Detection by Optimizing Audio LLM Components

📅 2025-12-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional audio deepfake detection models generalize poorly, while existing Audio Large Language Models (ALLMs) hit bottlenecks in cross-domain detection and in multi-task settings such as forgery localization and attribute identification. Method: the paper proposes an ALLM architecture that jointly optimizes the audio encoder and the text-based large language model, adding feature alignment and task-adaptive mechanisms so the model captures both low-level synthesis artifacts and the high-level semantics of synthetic speech. Contribution/Results: to the authors' knowledge, this is the first unified ALLM framework to reach state-of-the-art performance in both cross-domain detection and multi-task generalization, averaging 95.76% accuracy across the ASVSpoof2019, In-The-Wild, and Demopage benchmarks and surpassing prior work. It also markedly improves forgery attribution and precise localization, easing longstanding limits on generalizability and task extensibility.
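The summary describes the usual ALLM wiring: an audio encoder whose frame features are aligned into the text LLM's embedding space before the language model runs. A minimal NumPy sketch of that connector is below; the dimensions, names, and the linear form of the alignment module are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the paper does not specify these.
AUDIO_DIM = 768   # audio-encoder feature size per frame
LLM_DIM = 1024    # text-LLM token-embedding size
N_FRAMES = 50     # encoded frames for one audio clip

def project_audio_features(audio_feats: np.ndarray,
                           W: np.ndarray,
                           b: np.ndarray) -> np.ndarray:
    """Map audio-encoder frames into the LLM embedding space.

    A linear "connector" like this is the common glue between an audio
    encoder and a text LLM in ALLM designs; the paper's feature-alignment
    mechanism may well be more elaborate.
    """
    return audio_feats @ W + b

# Stand-ins for real encoder outputs and learned connector weights.
audio_feats = rng.standard_normal((N_FRAMES, AUDIO_DIM))
W = rng.standard_normal((AUDIO_DIM, LLM_DIM)) * 0.02
b = np.zeros(LLM_DIM)

llm_tokens = project_audio_features(audio_feats, W, b)
# The projected frames would then be prepended to the embeddings of a
# text instruction (e.g. "Is this audio bona fide or spoofed?") before
# the LLM forward pass.
print(llm_tokens.shape)  # -> (50, 1024)
```

Joint optimization here would mean backpropagating the detection loss through the connector and, depending on the configuration, into the encoder and LLM as well.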

📝 Abstract
Audio deepfake detection has recently garnered public concern due to its implications for security and reliability. Traditional deep learning methods have been widely applied to this task but often lack generalisability when confronted with newly emerging spoofing techniques and more tasks such as spoof attribution recognition rather than simple binary classification. In principle, Large Language Models (LLMs) are considered to possess the needed generalisation capabilities. However, previous research on Audio LLMs (ALLMs) indicates a generalization bottleneck in audio deepfake detection performance, even when sufficient data is available. Consequently, this study investigates the model architecture and examines the effects of the primary components of ALLMs, namely the audio encoder and the text-based LLM. Our experiments demonstrate that the careful selection and combination of audio encoders and text-based LLMs are crucial for unlocking the deepfake detection potential of ALLMs. We further propose an ALLM structure capable of generalizing deepfake detection abilities to out-of-domain spoofing tests and other deepfake tasks, such as spoof positioning and spoof attribution recognition. Our proposed model architecture achieves state-of-the-art (SOTA) performance across multiple datasets, including ASVSpoof2019, InTheWild, and Demopage, with accuracy reaching up to 95.76% on average, and exhibits competitive capabilities in other deepfake detection tasks such as attribution and localisation compared to SOTA audio understanding models. Data and codes are provided in supplementary materials.
Problem

Research questions and friction points this paper is trying to address.

Traditional detectors fail to generalize to newly emerging spoofing techniques
Existing audio LLMs hit a generalization bottleneck in spoof detection even with sufficient data
Binary real/fake classification alone does not cover tasks like spoof attribution and localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Careful selection and joint optimization of the audio encoder and text-based LLM
Generalization to out-of-domain spoofing tests and further tasks (attribution, localization)
SOTA average accuracy of 95.76% across ASVSpoof2019, InTheWild, and Demopage
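The multitask angle (detection, attribution, localization) maps naturally onto instruction-style queries to a single ALLM. The sketch below shows one plausible way to frame the three tasks as text prompts over a shared audio placeholder; the template wording is hypothetical, since the summary does not give the actual instructions used for training.

```python
# Hypothetical prompt templates for the three tasks the paper covers.
TASK_PROMPTS = {
    "detection": "Listen to the audio. Is it bona fide or spoofed?",
    "attribution": "Listen to the audio. Which spoofing system generated it?",
    "localization": "Listen to the audio. Report the start and end times "
                    "of any spoofed segments.",
}

def build_query(task: str, audio_placeholder: str = "<audio>") -> str:
    """Compose a single-task query: audio tokens followed by the instruction."""
    if task not in TASK_PROMPTS:
        raise ValueError(f"unknown task: {task}")
    return f"{audio_placeholder} {TASK_PROMPTS[task]}"

for task in TASK_PROMPTS:
    print(build_query(task))
```

One model answering all three prompts is what distinguishes this setup from traditional binary classifiers, which would need a separate head or model per task.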