🤖 AI Summary
Traditional TTS systems rely on studio-recorded, high-fidelity speech and therefore generalize poorly to real-world noisy conditions; moreover, large-scale, in-the-wild speech-text paired datasets suitable for noisy-TTS training remain scarce. To address this, we introduce TITW (TTS In the Wild), a large-scale in-the-wild TTS dataset built from VoxCeleb1 through a fully automated pipeline, comprising two training sets: TITW-Hard, obtained by transcribing, segmenting, and selecting raw VoxCeleb1 audio, and TITW-Easy, which adds speech enhancement and DNSMOS-based quality selection. The pipeline removes any reliance on manual annotation or clean studio recordings. State-of-the-art TTS models trained on TITW-Easy reach UTMOS scores above 3.0, while TITW-Hard remains challenging, with UTMOS below 2.8. The dataset is publicly released, supporting both research on and practical deployment of in-the-wild TTS systems.
📝 Abstract
Traditional Text-to-Speech (TTS) systems rely on studio-quality speech recorded in controlled settings. Recently, an effort known as noisy-TTS training has emerged, aiming to utilize in-the-wild data. However, the lack of dedicated datasets has been a significant limitation. We introduce the publicly available TTS In the Wild (TITW) dataset, created through a fully automated pipeline applied to the VoxCeleb1 dataset. It comprises two training sets: TITW-Hard, derived from the transcription, segmentation, and selection of raw VoxCeleb1 data, and TITW-Easy, which incorporates additional enhancement and data selection based on DNSMOS. State-of-the-art TTS models achieve UTMOS scores above 3.0 when trained with TITW-Easy, while TITW-Hard remains challenging, with UTMOS below 2.8.
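The quality-based selection step described above can be sketched as a simple threshold filter. This is a minimal illustration, not the paper's actual pipeline: the `score_fn` callable is a hypothetical stand-in for a real DNSMOS predictor, and the threshold value is illustrative, not taken from the paper.

```python
from typing import Callable, Dict, List, Tuple

# Illustrative cutoff only; the paper's actual selection criterion may differ.
QUALITY_THRESHOLD = 3.0

def split_by_quality(
    segments: List[Dict[str, str]],
    score_fn: Callable[[str], float],
    threshold: float = QUALITY_THRESHOLD,
) -> Tuple[List[Dict[str, str]], List[Dict[str, str]]]:
    """Partition transcribed segments by predicted quality.

    Segments at or above the threshold go to the 'easy' pool (analogous
    to DNSMOS-based selection for TITW-Easy); the rest go to 'hard'.
    """
    easy, hard = [], []
    for seg in segments:
        (easy if score_fn(seg["path"]) >= threshold else hard).append(seg)
    return easy, hard

# Toy usage with dummy scores standing in for DNSMOS predictions.
segments = [{"path": "a.wav"}, {"path": "b.wav"}, {"path": "c.wav"}]
dummy_scores = {"a.wav": 3.4, "b.wav": 2.1, "c.wav": 3.1}
easy, hard = split_by_quality(segments, lambda p: dummy_scores[p])
```

In the actual dataset construction, enhancement is applied before this selection for TITW-Easy, whereas TITW-Hard is derived from the raw data; this sketch only shows the thresholding idea.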