ASR Error Correction using Large Language Models

📅 2024-09-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses error correction for black-box automatic speech recognition (ASR) systems, where the underlying code and model weights are inaccessible. The authors build large language model (LLM)-based error-correction models that take ASR N-best lists, rather than only the 1-best hypothesis, as input, and introduce a constrained decoding scheme that restricts the corrected output to the N-best list or an ASR lattice, explicitly modelling ASR uncertainty. They further examine how well error-correction models transfer across different ASR systems without retraining, extending this to zero-shot correction with LLMs such as ChatGPT. Evaluated on three standard benchmarks with both Transducer and attention-based encoder-decoder ASR systems, the approach reduces word error rate and can also serve as an effective model-ensembling method.
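The lattice-constrained idea can be sketched as a best-path search over a word lattice: the correction output is restricted to paths through the recogniser's own hypothesis space rather than free generation. The scoring function below is a toy stand-in (the paper's actual LLM-based scoring is not reproduced here), and the lattice, node numbering, and `freq` table are illustrative assumptions.

```python
def lattice_best_path(lattice, start, end, arc_score):
    """Best-scoring path through a word lattice (a DAG of word arcs).

    lattice: dict mapping node -> list of (word, next_node) arcs.
    arc_score: callable scoring each word arc; here a toy stand-in for
    the combined ASR/LLM scores used in lattice-constrained decoding.
    Constraining correction to lattice paths keeps the output within
    the recogniser's own hypothesis space.
    """
    # Dynamic programming in topological order (nodes are assumed
    # numbered so that every arc goes from a lower to a higher id).
    best = {start: (0.0, [])}
    for node in sorted(lattice):
        if node not in best:
            continue
        score, words = best[node]
        for word, nxt in lattice[node]:
            cand = (score + arc_score(word), words + [word])
            if nxt not in best or cand[0] > best[nxt][0]:
                best[nxt] = cand
    return " ".join(best[end][1])

# Toy lattice: two competing readings of the same utterance.
lattice = {
    0: [("i", 1), ("eye", 1)],
    1: [("scream", 2)],
    2: [("for", 3)],
    3: [("ice", 4)],
    4: [("cream", 5)],
    5: [],
}
# Hypothetical per-word scores standing in for LLM log-probabilities.
freq = {"i": 2.0, "eye": 1.0, "scream": 1.0, "for": 1.0,
        "ice": 1.0, "cream": 1.0}
best_path = lattice_best_path(lattice, 0, 5, lambda w: freq.get(w, 0.0))
```

The same selection logic applies to N-best constrained decoding, where the search space is simply the list of complete hypotheses instead of a lattice.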

📝 Abstract
Error correction (EC) models play a crucial role in refining Automatic Speech Recognition (ASR) transcriptions, enhancing the readability and quality of transcriptions. Without requiring access to the underlying code or model weights, EC can improve performance and provide domain adaptation for black-box ASR systems. This work investigates the use of large language models (LLMs) for error correction across diverse scenarios. 1-best ASR hypotheses are commonly used as the input to EC models. We propose building high-performance EC models using ASR N-best lists which should provide more contextual information for the correction process. Additionally, the generation process of a standard EC model is unrestricted in the sense that any output sequence can be generated. For some scenarios, such as unseen domains, this flexibility may impact performance. To address this, we introduce a constrained decoding approach based on the N-best list or an ASR lattice. Finally, most EC models are trained for a specific ASR system requiring retraining whenever the underlying ASR system is changed. This paper explores the ability of EC models to operate on the output of different ASR systems. This concept is further extended to zero-shot error correction using LLMs, such as ChatGPT. Experiments on three standard datasets demonstrate the efficacy of our proposed methods for both Transducer and attention-based encoder-decoder ASR systems. In addition, the proposed method can serve as an effective method for model ensembling.
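The zero-shot setting described above amounts to prompting a chat LLM with the ranked ASR hypotheses and asking for a single corrected transcript. The template below is a hypothetical illustration, not the paper's actual prompt; the function name and wording are assumptions.

```python
def build_ec_prompt(nbest):
    """Assemble a zero-shot error-correction prompt from an ASR N-best list.

    Hypothetical template: ranked hypotheses are numbered and handed to
    a chat LLM (e.g. ChatGPT), which is asked to return one corrected
    transcription. The exact prompt used in the paper may differ.
    """
    lines = [f"{i + 1}. {hyp}" for i, hyp in enumerate(nbest)]
    return (
        "The following are ranked hypotheses from a speech recogniser "
        "for the same utterance. Return the most likely correct "
        "transcription, fixing any recognition errors:\n"
        + "\n".join(lines)
    )

prompt = build_ec_prompt([
    "i scream for ice cream",
    "eye scream for ice cream",
])
```

Because the hypotheses carry complementary evidence about the utterance, the same prompt structure also supports the ensembling use case: N-best lists from several different ASR systems can be merged into one candidate list before prompting.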
Problem

Research questions and friction points this paper is trying to address.

Automatic Speech Recognition
Accuracy Improvement
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
ASR Error Correction
Zero-shot Correction