🤖 AI Summary
ASR systems achieve high accuracy on general speech but still misrecognize named entities (e.g., proper nouns) at elevated rates, undermining robustness in downstream applications. To address this, we propose a context-aware ASR post-processing method that leverages large language models (LLMs): domain-specific local texts (e.g., lecture notes) supply ground-truth entity references that guide LLM-based correction of ASR output. This work is the first to combine LLM reasoning with education-domain contextual knowledge for ASR error correction. We also introduce NER-MIT-OpenCourseWare, the first benchmark dataset for named entity recognition in educational speech, and demonstrate up to a 30% relative reduction in word error rate for named entities, improving reliability for downstream educational applications.
📝 Abstract
With recent advances in modeling and the increasing amount of supervised training data, automatic speech recognition (ASR) systems have achieved remarkable performance on general speech. However, the word error rate (WER) of state-of-the-art ASR remains high for named entities. Since named entities are often the most critical keywords, misrecognizing them can affect all downstream applications, especially when the ASR system functions as the front end of a complex system. In this paper, we introduce a large language model (LLM) revision mechanism to revise incorrect named entities in ASR predictions by leveraging the LLM's reasoning ability as well as local context (e.g., lecture notes) containing a set of correct named entities. Finally, we introduce the NER-MIT-OpenCourseWare dataset, containing 45 hours of data from MIT courses for development and testing. On this dataset, our proposed technique achieves up to 30% relative WER reduction for named entities.
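The revision mechanism described above can be sketched in two pieces: building a correction prompt that injects the local entity list into the LLM's context, and a simple string-similarity fallback that illustrates the correction idea without an LLM call. This is a minimal sketch, not the paper's implementation: the function names, prompt wording, and the word-level fuzzy matching are all assumptions for illustration.

```python
import difflib


def build_revision_prompt(transcript: str, entities: list[str]) -> str:
    """Construct an LLM prompt that pairs the ASR transcript with the
    correct entity names drawn from local context (e.g., lecture notes).
    The prompt wording here is an illustrative assumption, not the
    paper's actual template."""
    entity_list = ", ".join(entities)
    return (
        "The following lecture transcript may contain misrecognized "
        f"named entities. Known correct entities: {entity_list}.\n"
        "Rewrite the transcript, correcting only entity errors:\n"
        f"{transcript}"
    )


def fuzzy_entity_fallback(transcript: str, entities: list[str],
                          cutoff: float = 0.8) -> str:
    """Simplified stand-in for the LLM revision step: replace a
    transcript word with the closest known entity when their string
    similarity exceeds `cutoff`. The paper's method instead uses LLM
    reasoning, which can also handle multi-word and phonetic errors
    that this word-level sketch cannot."""
    lowered = {e.lower(): e for e in entities}  # lowercase -> original form
    corrected = []
    for word in transcript.split():
        match = difflib.get_close_matches(word.lower(), lowered,
                                          n=1, cutoff=cutoff)
        corrected.append(lowered[match[0]] if match else word)
    return " ".join(corrected)
```

For example, with the local entity list `["Fourier", "Laplace"]`, the fallback maps the misrecognized transcript "the forier transform" to "the Fourier transform", while `build_revision_prompt` would hand the same entity list to the LLM for the full reasoning-based revision.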