🤖 AI Summary
This paper addresses two key challenges in LLM-based automatic speech recognition (ASR): weak phonemic modeling and poor robustness in distinguishing homophones. To this end, we propose a pronunciation-aware contextual modeling framework. Our core contributions are: (1) grapheme-phoneme interleaved joint modeling, explicitly integrating orthographic and phonemic structures; (2) incorporation of grapheme-level perturbations and noisy label sampling to enhance phoneme cue utilization; and (3) integration of pronunciation-guided in-context learning with pronunciation-discriminative reinforcement learning to improve homophone disambiguation in context. Evaluated on LibriSpeech and AISHELL-1, our method achieves relative word error rate reductions of 30.2% and 53.8%, respectively, and reduces long-tail word bias errors by 31.8% and 60.5%. These results demonstrate substantial improvements in recognizing rare and long-tail vocabulary.
📄 Abstract
This paper presents a Pronunciation-Aware Contextualized (PAC) framework to address two key challenges in Large Language Model (LLM)-based Automatic Speech Recognition (ASR) systems: effective pronunciation modeling and robust homophone discrimination. Both are essential for rare or long-tail word recognition. The proposed approach adopts a two-stage learning paradigm. First, we introduce a pronunciation-guided context learning method. It employs an interleaved grapheme-phoneme context modeling strategy that incorporates grapheme-only distractors, encouraging the model to leverage phonemic cues for accurate recognition. Then, we propose a pronunciation-discriminative reinforcement learning method with perturbed label sampling to further enhance the model's ability to distinguish contextualized homophones. Experimental results on the public English LibriSpeech and Mandarin AISHELL-1 datasets indicate that PAC: (1) reduces relative Word Error Rate (WER) by 30.2% and 53.8% compared to pre-trained LLM-based ASR models, and (2) achieves 31.8% and 60.5% relative reductions in biased WER for long-tail words compared to strong baselines, respectively.