🤖 AI Summary
This work addresses the scarcity of large-scale multilingual read-speech datasets with reliable speaker labels, which has hindered progress in tasks such as anti-spoofing and speaker verification. We present the first systematic resolution of speaker label heterogeneity in Mozilla Common Voice, introducing TidyVoice, a high-quality dataset comprising over 212,000 monolingual and around 4,500 multilingual speakers. We further define standardized evaluation protocols for the monolingual (Tidy-M) and multilingual (Tidy-X) scenarios. A ResNet-based model fine-tuned on Tidy-M achieves an equal error rate (EER) of 0.35% and generalizes markedly better to the unseen conversational CANDOR dataset. Both the dataset and the models are publicly released, establishing the first large-scale open benchmark for cross-lingual speaker verification.
📝 Abstract
The development of robust, multilingual speaker recognition systems is hindered by the lack of large-scale, publicly available multilingual datasets, particularly for the read-speech style crucial for applications such as anti-spoofing. To address this gap, we introduce the TidyVoice dataset, derived from the Mozilla Common Voice corpus after mitigating the speaker heterogeneity inherent in its client IDs. TidyVoice currently contains training and test data from over 212,000 monolingual speakers and around 4,500 multilingual speakers, from which we derive two evaluation conditions. The Tidy-M condition contains target and non-target trials from monolingual speakers across 81 languages. The Tidy-X condition contains target and non-target trials from multilingual speakers in both same- and cross-language trials. We train two ResNet architectures, achieving a 0.35% EER by fine-tuning on our comprehensive Tidy-M partition. Moreover, we show that this fine-tuning improves the model's generalization, boosting performance on unseen conversational interview data from the CANDOR corpus. The complete dataset, evaluation trials, and our models are publicly released as a new resource for the community.
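The headline metric above, equal error rate (EER), is the operating point on a set of target (same-speaker) and non-target (different-speaker) trials where the false-acceptance rate equals the false-rejection rate. As an illustration only, not the paper's evaluation code, a minimal sketch of computing EER from trial scores with NumPy:

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Approximate the EER by sweeping a decision threshold over
    all observed scores and finding where FAR and FRR cross.
    target_scores: similarity scores for same-speaker trials.
    nontarget_scores: similarity scores for different-speaker trials.
    """
    target_scores = np.asarray(target_scores, dtype=float)
    nontarget_scores = np.asarray(nontarget_scores, dtype=float)
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))

    # False acceptance: non-target trial scored at/above threshold.
    far = np.array([(nontarget_scores >= t).mean() for t in thresholds])
    # False rejection: target trial scored below threshold.
    frr = np.array([(target_scores < t).mean() for t in thresholds])

    # EER is where the two error curves intersect (closest point here).
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2.0

# Perfectly separated scores give an EER of 0.
print(compute_eer([0.9, 0.8, 0.7], [0.1, 0.2, 0.3]))  # → 0.0
```

In practice, toolkit implementations interpolate between thresholds for a smoother estimate, but the crossing-point idea is the same.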