🤖 AI Summary
This work addresses improving automatic speech recognition (ASR) using only unlabeled speech data. We propose a co-iterative ASR-TTS self-refinement framework: an initial ASR model generates pseudo-labels that are used to train a high-fidelity end-to-end TTS model (e.g., VITS or FastSpeech 2), and the synthetic speech-text pairs produced by that TTS model are then used in turn to fine-tune the ASR model. This paradigm requires no manual annotations, teacher-model distillation, or cross-lingual parallel corpora, introducing a closed-loop self-refinement mechanism for ASR. Using only 6,000 hours of unlabeled Taiwanese Mandarin speech and a modest amount of text, we build Twister, a domain-adapted multilingual ASR model that reduces error rates by up to 20% on Mandarin and up to 50% on Mandarin-English code-switching benchmarks, demonstrating the effectiveness and scalability of purely self-supervised ASR optimization.
📝 Abstract
We propose a self-refining framework that enhances ASR performance using only unlabeled datasets. The process starts with an existing ASR model generating pseudo-labels on unannotated speech, which are then used to train a high-fidelity text-to-speech (TTS) system. The synthesized speech-text pairs are then bootstrapped back into the original ASR system, completing the closed-loop self-improvement cycle. We demonstrate the effectiveness of the framework on Taiwanese Mandarin speech. Leveraging 6,000 hours of unlabeled speech, a moderate amount of text data, and synthetic content generated by AI models, we adapt Whisper-large-v2 into a specialized model, Twister. Twister reduces error rates by up to 20% on Mandarin and up to 50% on Mandarin-English code-switching benchmarks, relative to Whisper-large-v2. These results highlight the framework as a compelling alternative to pseudo-labeling self-distillation approaches and provide a practical pathway for improving ASR performance in low-resource or domain-specific settings.
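The closed-loop cycle described above can be sketched as pseudocode. This is a minimal toy illustration of the control flow only: the model objects, `transcribe`, `train_tts`, `synthesize`, and `finetune_asr` stubs are hypothetical stand-ins, not the paper's actual training code.

```python
# Toy sketch of the ASR -> TTS -> ASR self-refinement loop.
# All "models" here are trivial stubs; only the loop structure
# mirrors the framework described in the abstract.

def transcribe(asr_model, speech):
    """Step 1: the current ASR model produces pseudo-labels."""
    return [asr_model["decode"](clip) for clip in speech]

def train_tts(pairs):
    """Step 2: train a (stub) TTS system on (speech, pseudo-label) pairs."""
    return {text: clip for clip, text in pairs}

def synthesize(tts_model, texts):
    """Step 3: synthesize speech for a text corpus, yielding speech-text pairs."""
    return [(tts_model.get(t, f"<synth:{t}>"), t) for t in texts]

def finetune_asr(asr_model, synthetic_pairs):
    """Step 4: bootstrap the synthetic pairs back into the ASR model (stub)."""
    updated = dict(asr_model)
    updated["train_pairs"] = updated.get("train_pairs", []) + synthetic_pairs
    return updated

def self_refine(asr_model, unlabeled_speech, text_corpus, cycles=1):
    """Run the closed-loop self-improvement cycle one or more times."""
    for _ in range(cycles):
        pseudo_labels = transcribe(asr_model, unlabeled_speech)
        tts = train_tts(list(zip(unlabeled_speech, pseudo_labels)))
        synthetic_pairs = synthesize(tts, text_corpus)
        asr_model = finetune_asr(asr_model, synthetic_pairs)
    return asr_model
```

No labeled data enters the loop: the only supervision signal is the ASR model's own pseudo-labels, which is what distinguishes this cycle from teacher-student distillation.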