PAC: Pronunciation-Aware Contextualized Large Language Model-based Automatic Speech Recognition

📅 2025-09-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses two key challenges in LLM-based automatic speech recognition (ASR): weak phonemic modeling and poor robustness in distinguishing homophones. To this end, we propose a pronunciation-aware contextual modeling framework. Our core contributions are: (1) grapheme-phoneme interleaved joint modeling, explicitly integrating orthographic and phonemic structures; (2) grapheme-only distractors and perturbed label sampling, which push the model to rely on phonemic cues; and (3) integration of pronunciation-guided in-context learning with pronunciation-discriminative reinforcement learning to improve homophone disambiguation in context. Evaluated on LibriSpeech and AISHELL-1, our method achieves relative word error rate reductions of 30.2% and 53.8%, respectively, and reduces long-tail word bias errors by 31.8% and 60.5%. These results demonstrate substantial improvements in recognizing rare and long-tail vocabulary.
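As a rough illustration of the grapheme-phoneme interleaved context described above, one could imagine building the biasing context by pairing each bias word with its phoneme string while distractor words appear as graphemes only. This is a minimal sketch under assumed conventions: the toy `G2P` lookup and the `word /PHONEMES/` tag format are hypothetical, not the paper's actual tokenization.

```python
import random

# Toy grapheme-to-phoneme lookup (stand-in for a real G2P system).
# ARPAbet-style phonemes; "knight"/"night" and "whale"/"wail" are homophone pairs.
G2P = {
    "knight": "N AY T",
    "night": "N AY T",
    "whale": "W EY L",
    "wail": "W EY L",
}

def build_context(bias_words, distractor_words, seed=0):
    """Interleave each bias word with its phoneme string; distractors
    appear as graphemes only, so the model must use phonemic cues to
    tell genuine bias entries apart from lookalikes."""
    entries = [f"{w} /{G2P[w]}/" for w in bias_words]  # grapheme + phonemes
    entries += list(distractor_words)                  # grapheme-only distractors
    random.Random(seed).shuffle(entries)               # mix so position carries no signal
    return "; ".join(entries)

print(build_context(["knight", "whale"], ["night", "wail"]))
```

The shuffle matters: if annotated bias words always preceded distractors, the model could key on position rather than pronunciation.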

πŸ“ Abstract
This paper presents a Pronunciation-Aware Contextualized (PAC) framework to address two key challenges in Large Language Model (LLM)-based Automatic Speech Recognition (ASR) systems: effective pronunciation modeling and robust homophone discrimination. Both are essential for rare or long-tail word recognition. The proposed approach adopts a two-stage learning paradigm. First, we introduce a pronunciation-guided context learning method. It employs an interleaved grapheme-phoneme context modeling strategy that incorporates grapheme-only distractors, encouraging the model to leverage phonemic cues for accurate recognition. Then, we propose a pronunciation-discriminative reinforcement learning method with perturbed label sampling to further enhance the model's ability to distinguish contextualized homophones. Experimental results on the public English LibriSpeech and Mandarin AISHELL-1 datasets indicate that PAC: (1) reduces relative Word Error Rate (WER) by 30.2% and 53.8% compared to pre-trained LLM-based ASR models, and (2) achieves 31.8% and 60.5% relative reductions in biased WER for long-tail words compared to strong baselines, respectively.
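The perturbed label sampling in the reinforcement learning stage can be pictured as generating near-miss transcripts by swapping words for their homophones, which the model then learns to penalize. This is a hedged sketch: the hand-written `HOMOPHONES` table is a hypothetical stand-in for whatever phoneme-overlap-based confusion set the paper actually uses.

```python
import random

# Hypothetical homophone table; in practice such confusions would be
# derived from phoneme overlap, not listed by hand.
HOMOPHONES = {
    "knight": ["night"],
    "night": ["knight"],
    "whale": ["wail"],
    "wail": ["whale"],
}

def perturb_transcript(words, rng):
    """Swap one homophone-bearing word to create a plausible but
    incorrect transcript, usable as a negative sample during RL."""
    candidates = [i for i, w in enumerate(words) if w in HOMOPHONES]
    if not candidates:
        return list(words)          # nothing to perturb; return a copy
    i = rng.choice(candidates)      # pick one confusable position
    out = list(words)
    out[i] = rng.choice(HOMOPHONES[out[i]])
    return out

ref = ["the", "knight", "rode", "at", "night"]
neg = perturb_transcript(ref, random.Random(1))
```

Because the perturbed transcript is acoustically indistinguishable from the reference, rewarding the correct spelling and penalizing the swap directly targets contextual homophone discrimination.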
Problem

Research questions and friction points this paper is trying to address.

Modeling pronunciation in LLM-based ASR systems
Discriminating homophones for accurate speech recognition
Improving recognition of rare and long-tail words
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interleaved grapheme-phoneme context modeling
Pronunciation-guided contextual learning method
Pronunciation-discriminative reinforcement learning with perturbed label sampling