SEA-Spoof: Bridging The Gap in Multilingual Audio Deepfake Detection for South-East Asian

๐Ÿ“… 2025-09-24
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Southeast Asian (SEA) languages suffer from severe data scarcity and performance degradation in audio deepfake detection (ADD). To address this, we introduce SEA-Spoofโ€”the first large-scale, multilingual ADD benchmark covering six SEA languages, comprising over 300 hours of authentic and spoofed speech pairs. Crucially, we systematically generate spoof samples using cross-source, high-diversity text-to-speech (TTS) and voice conversion (VC) systems, ensuring broad coverage of synthesis artifacts. We further propose a joint cross-lingual modeling and fine-tuning framework to enhance generalization to unseen languages and spoofing methods. Experiments reveal that state-of-the-art ADD models suffer >25% average accuracy drop on SEA languages; in contrast, fine-tuning on SEA-Spoof boosts per-language detection accuracy by 12โ€“38 percentage points. This demonstrates SEA-Spoofโ€™s critical role in bridging the multilingual ADD gap and validates our methodological advances in robust, generalizable deepfake detection.

Technology Category

Application Category

๐Ÿ“ Abstract
The rapid growth of the digital economy in South-East Asia (SEA) has amplified the risks of audio deepfakes, yet current datasets cover SEA languages only sparsely, leaving models poorly equipped to handle this critical region. This omission is critical: detection models trained on high-resource languages collapse when applied to SEA, due to mismatches in synthesis quality, language-specific characteristics, and data scarcity. To close this gap, we present SEA-Spoof, the first large-scale Audio Deepfake Detection (ADD) dataset especially for SEA languages. SEA-Spoof spans 300+ hours of paired real and spoof speech across Tamil, Hindi, Thai, Indonesian, Malay, and Vietnamese. Spoof samples are generated from a diverse mix of state-of-the-art open-source and commercial systems, capturing wide variability in style and fidelity. Benchmarking state-of-the-art detection models reveals severe cross-lingual degradation, but fine-tuning on SEA-Spoof dramatically restores performance across languages and synthesis sources. These results highlight the urgent need for SEA-focused research and establish SEA-Spoof as a foundation for developing robust, cross-lingual, and fraud-resilient detection systems.
Problem

Research questions and friction points this paper is trying to address.

Detecting audio deepfakes in under-resourced South-East Asian languages
Addressing performance collapse of models trained on high-resource languages
Creating a large-scale multilingual dataset to bridge the detection gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created first large-scale multilingual audio deepfake dataset
Generated spoof samples using diverse synthesis systems
Fine-tuned detection models to restore cross-lingual performance
๐Ÿ”Ž Similar Papers
No similar papers found.