🤖 AI Summary
Existing deep biasing methods enhance the subword units of context phrases independently, compromising their semantic integrity and degrading ASR performance. To address this, we propose a phrase-level contextualized speech recognition framework. Our method introduces an enhanced encoder architecture that performs phrase-level dynamic vocabulary prediction, coupled with a confidence-driven activation decoding mechanism, to model context phrases as holistic semantic units. Additionally, we design a frame-to-phrase bias loss that explicitly enforces complete phrase-level output and suppresses erroneous biasing. Evaluated on LibriSpeech and WenetSpeech, our approach achieves relative WER reductions of 28.31% and 23.49%, respectively, while the WER on context phrases drops by a relative 72.04% and 75.69%. These results demonstrate substantial improvements in both robustness and accuracy for critical phrase recognition.
📄 Abstract
Deep biasing improves automatic speech recognition (ASR) performance by incorporating contextual phrases. However, most existing methods enhance the subwords of a contextual phrase as independent units, potentially compromising the phrase's integrity and reducing accuracy. In this paper, we propose an encoder-based phrase-level contextualized ASR method that leverages dynamic vocabulary prediction and activation. We introduce architectural optimizations and integrate a bias loss to extend frame-level outputs into phrase-level predictions. We also introduce a confidence-activated decoding method that ensures contextual phrases are output in full while suppressing incorrect bias. Experiments on the LibriSpeech and WenetSpeech datasets demonstrate that our approach achieves relative WER reductions of 28.31% and 23.49% compared to the baseline, with the WER on contextual phrases decreasing by a relative 72.04% and 75.69%.
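To make the confidence-activated decoding idea concrete, here is a minimal illustrative sketch (not the paper's implementation; the function, data layout, and threshold are assumptions): a contextual phrase is emitted as a whole unit only when its predicted confidence clears a threshold, otherwise the ordinary subword hypothesis is kept, which is how incorrect bias can be suppressed.

```python
# Illustrative sketch of confidence-driven phrase activation (assumed
# interface, not the paper's actual decoder).

def activate_phrases(subword_hyp, phrase_scores, threshold=0.9):
    """subword_hyp: list of (token, phrase_id or None) pairs from
    frame-level decoding; phrase_id links a subword to a candidate
    contextual phrase.
    phrase_scores: dict mapping phrase_id -> (confidence, full_phrase)."""
    output, emitted = [], set()
    for token, pid in subword_hyp:
        if pid is not None and phrase_scores[pid][0] >= threshold:
            if pid not in emitted:
                # High confidence: emit the contextual phrase once, in full,
                # preserving its integrity as a single semantic unit.
                output.append(phrase_scores[pid][1])
                emitted.add(pid)
        else:
            # Low confidence or no phrase match: keep the plain subword
            # output, suppressing an incorrect bias.
            output.append(token)
    return output

hyp = [("the", None), ("nu", 0), ("ance", 0), ("matters", None)]
print(activate_phrases(hyp, {0: (0.95, "nuance")}))  # confident: whole phrase
print(activate_phrases(hyp, {0: (0.50, "nuance")}))  # not confident: subwords
```

With a confident score the two subwords "nu"/"ance" are replaced by the single phrase "nuance"; below the threshold the original subword hypothesis passes through unchanged.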