🤖 AI Summary
To address the scarcity of high-quality training data for Open Whisper-style Speech Models (OWSM) and the pervasive quality issues of large-scale web-crawled corpora such as YODAS (notably incorrect language labels and audio-text misalignment), this work constructs a cleaned multilingual speech corpus spanning 75 languages and totaling 166,000 hours. The authors build a scalable, open-source data-cleaning pipeline from public toolkits, integrating fastText for language identification, WhisperX for forced alignment, and CTC-based scoring for audio-text realignment and noise filtering. Trained on this curated dataset alongside existing OWSM data, the new OWSM v4 models significantly outperform all previous OWSM versions on multilingual ASR benchmarks and match or surpass frontier industrial models such as Whisper-large-v3 and MMS-1B in multiple scenarios, including low-resource languages, making them among the strongest fully open, academically trained multilingual ASR models to date. The cleaned YODAS data, pre-trained models, and all associated scripts will be publicly released via the ESPnet toolkit.
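To make the cleaning step concrete, here is a minimal sketch (not the authors' released pipeline) of the kind of per-segment filtering such a cleanup applies, assuming each segment carries a language-ID prediction (e.g., from fastText) and a forced-alignment score (e.g., from a CTC aligner). The `Segment` fields, threshold values, and `keep_segment` helper are all hypothetical.

```python
# Hypothetical per-segment filter for a web-crawled speech corpus:
# drop segments whose language label is not confirmed by LID, or whose
# audio-text alignment score is too low. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    claimed_lang: str   # language label from the crawl metadata
    lid_lang: str       # language predicted by an LID model
    lid_conf: float     # LID confidence in [0, 1]
    align_score: float  # forced-alignment log-prob (higher = better aligned)

def keep_segment(seg: Segment,
                 lid_conf_min: float = 0.8,
                 align_score_min: float = -2.0) -> bool:
    """Keep a segment only if its language label is confirmed by LID
    and its audio-text alignment score clears a threshold."""
    if seg.lid_conf < lid_conf_min:
        return False  # LID prediction too uncertain to trust
    if seg.lid_lang != seg.claimed_lang:
        return False  # language label contradicted by LID
    if seg.align_score < align_score_min:
        return False  # likely audio-text mismatch or noise
    return True

segments = [
    Segment("hello world", "en", "en", 0.97, -0.5),  # clean
    Segment("bonjour",     "en", "fr", 0.95, -0.4),  # wrong language label
    Segment("noisy clip",  "en", "en", 0.92, -5.1),  # misaligned audio
]
kept = [s for s in segments if keep_segment(s)]
```

In a real pipeline the two signals come from separate passes (text/audio LID first, then alignment on the surviving segments), but the keep/drop decision reduces to threshold checks like these.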
📝 Abstract
The Open Whisper-style Speech Models (OWSM) project has developed a series of fully open speech foundation models using academic-scale resources, but their training data remains insufficient. This work enhances OWSM by integrating YODAS, a large-scale web-crawled dataset with a Creative Commons license. However, incorporating YODAS is nontrivial due to its wild nature, which introduces challenges such as incorrect language labels and audio-text misalignments. To address this, we develop a scalable data-cleaning pipeline using public toolkits, yielding a dataset with 166,000 hours of speech across 75 languages. Our new series of OWSM v4 models, trained on this curated dataset alongside existing OWSM data, significantly outperform previous versions on multilingual benchmarks. Our models even match or surpass frontier industrial models like Whisper and MMS in multiple scenarios. We will publicly release the cleaned YODAS data, pre-trained models, and all associated scripts via the ESPnet toolkit.