🤖 AI Summary
Phone-related tasks, including automatic speech recognition (ASR), phone recognition (PR), grapheme-to-phoneme (G2P) conversion, and phoneme-to-grapheme (P2G) conversion, have traditionally been studied in isolation, each with its own task-specific architectures and datasets. This work introduces POWSM (Phonetic Open Whisper-style Speech Model), the first unified framework that performs these phone-related tasks jointly within a single Whisper-style architecture, enabling seamless conversion among audio, phones, and graphemes (text). By dismantling task-specific boundaries, POWSM opens up new possibilities for universal and low-resource speech processing. It outperforms or matches specialized phone recognition models of similar size (Wav2Vec2Phoneme and ZIPA) while additionally supporting G2P, P2G, and ASR. The training data, code, and models are publicly released.
📝 Abstract
Recent advances in spoken language processing have led to substantial progress in phonetic tasks such as automatic speech recognition (ASR), phone recognition (PR), grapheme-to-phoneme conversion (G2P), and phoneme-to-grapheme conversion (P2G). Despite their conceptual similarity, these tasks have largely been studied in isolation, each relying on task-specific architectures and datasets. In this paper, we introduce POWSM (Phonetic Open Whisper-style Speech Model), the first unified framework capable of jointly performing multiple phone-related tasks. POWSM enables seamless conversion between audio, text (graphemes), and phones, opening up new possibilities for universal and low-resource speech processing. Our model outperforms or matches specialized PR models of similar size (Wav2Vec2Phoneme and ZIPA) while jointly supporting G2P, P2G, and ASR. Our training data, code, and models are released to foster open science.
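To make the task relationship concrete, the sketch below illustrates G2P and P2G as inverse sequence mappings using a toy hand-written lexicon. This is purely a conceptual illustration of the input/output relationship, not POWSM's method: the paper's model learns these mappings jointly from data with a Whisper-style architecture, and the lexicon and function names here are hypothetical.

```python
# Toy illustration (NOT POWSM's actual method): G2P and P2G as inverse
# mappings between spellings (graphemes) and IPA phone sequences.
# The lexicon below is a made-up example for demonstration only.
G2P_LEXICON = {
    "cat": "k æ t",
    "dog": "d ɔ g",
}

def g2p(word: str) -> str:
    """Grapheme-to-phoneme: map a spelling to a phone sequence."""
    return G2P_LEXICON[word.lower()]

def p2g(phones: str) -> str:
    """Phoneme-to-grapheme: invert the lexicon to recover a spelling."""
    inverse = {v: k for k, v in G2P_LEXICON.items()}
    return inverse[phones]

print(g2p("cat"))    # k æ t
print(p2g("k æ t"))  # cat
```

A unified model like POWSM replaces such a fixed lexicon with a single learned network that also consumes audio, so the same system can map speech to phones (PR), speech to text (ASR), text to phones (G2P), and phones to text (P2G).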