PhonemeDF: A Synthetic Speech Dataset for Audio Deepfake Detection and Naturalness Evaluation

📅 2026-03-16
🤖 AI Summary
This study addresses the lack of phoneme-level aligned datasets for deepfake speech detection and naturalness evaluation by introducing PhonemeDF, a novel dataset that integrates authentic speech from LibriSpeech with synthetic utterances generated by four text-to-speech (TTS) and three voice conversion (VC) systems. Using the Montreal Forced Aligner, all samples are precisely aligned at the phoneme level. The work further proposes, for the first time, the use of Kullback-Leibler divergence (KLD) to quantify distributional discrepancies between real and synthetic speech at the phoneme level. Empirical analysis reveals a strong correlation between KLD and classifier-based detection performance, demonstrating that KLD serves as an effective indicator for identifying highly discriminative phonemes. This approach establishes a new paradigm for phoneme-level analysis of speech forgeries.


📝 Abstract
The growing sophistication of speech generated by Artificial Intelligence (AI) has introduced new challenges in audio deepfake detection. Text-to-speech (TTS) and voice conversion (VC) technologies can create highly convincing synthetic speech with high naturalness and intelligibility. This poses serious threats to voice biometric security and to systems designed to combat the spread of spoken misinformation, where synthetic voices may be used to disseminate false or malicious content. While interest in AI-generated speech has increased, resources for evaluating naturalness at the phoneme level remain limited. In this work, we address this gap by presenting the Phoneme-Level DeepFake dataset (PhonemeDF), comprising parallel real and synthetic speech segmented at the phoneme level. Real speech samples are derived from a subset of LibriSpeech, while synthetic samples are generated using four TTS and three VC systems. For each system, phoneme-aligned TextGrid files are obtained using the Montreal Forced Aligner (MFA). We compute the Kullback-Leibler divergence (KLD) between real and synthetic phoneme distributions to quantify fidelity and establish a ranking based on similarity to natural speech. Our findings show a clear correlation between the KLD of real and synthetic phoneme distributions and the performance of classifiers trained to distinguish them, suggesting that KLD can serve as an indicator of the most discriminative phonemes for deepfake detection.
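The abstract's core idea, ranking phonemes by the KLD between real and synthetic feature distributions, can be illustrated with a minimal sketch. The paper does not specify its exact features or binning, so the scalar per-phoneme feature (e.g. duration extracted from MFA TextGrids), the histogram binning, and all function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """D_KL(P || Q) between two discrete distributions,
    with additive smoothing to avoid log(0) on empty bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def rank_phonemes_by_kld(real_feats, synth_feats, bins=32):
    """Rank phonemes by distributional divergence between real and
    synthetic speech (hypothetical helper, not from the paper).

    real_feats / synth_feats: dict mapping phoneme label -> list of
    scalar feature values (e.g. phoneme durations in seconds).
    Returns (phoneme, kld) pairs, most divergent first.
    """
    scores = {}
    for ph in set(real_feats) & set(synth_feats):
        # Shared bin edges so both histograms are comparable.
        lo = min(min(real_feats[ph]), min(synth_feats[ph]))
        hi = max(max(real_feats[ph]), max(synth_feats[ph]))
        edges = np.linspace(lo, hi + 1e-9, bins + 1)
        p, _ = np.histogram(real_feats[ph], bins=edges)
        q, _ = np.histogram(synth_feats[ph], bins=edges)
        scores[ph] = kl_divergence(p, q)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Under the paper's finding, phonemes at the top of this ranking would be the strongest candidates for discriminating real from synthetic speech.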
Problem

Research questions and friction points this paper is trying to address.

audio deepfake detection
speech naturalness
phoneme-level evaluation
synthetic speech
voice biometrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Phoneme-level alignment
Audio deepfake detection
Kullback-Leibler divergence
Synthetic speech naturalness
PhonemeDF dataset
Vamshi Nallaguntla
Wichita State University, Wichita, Kansas, USA
Aishwarya Fursule
Wichita State University, Wichita, Kansas, USA
Shruti Kshirsagar
Wichita State University
Deep Learning, Healthcare & AI, Signal Processing, Emotion Recognition, Deep Fake
Anderson R. Avila
Institut national de la recherche scientifique (INRS-EMT), Université du Québec, Canada; INRS-UQO Mixed Research Unit on Cybersecurity, Gatineau, Canada