🤖 AI Summary
ASR error correction must rectify erroneous tokens while preserving correct ones, yet direct LLM invocation often induces hallucinatory over-correction. To address this, we propose a training-free, annotation-free three-stage LLM framework: (1) pre-detection of potential errors; (2) chain-of-thought (CoT)-guided subtask decomposition and multi-round iterative correction; and (3) self-consistent reasoning process verification. Our approach introduces the first fine-tuning-free, external-knowledge-free multi-stage verification paradigm, substantially mitigating hallucination through structured reasoning and explicit process validation. Evaluated on AISHELL-1, AISHELL-2, and LibriSpeech, it achieves up to 21% relative reduction in character error rate (CER) and word error rate (WER), demonstrating significant improvements in correction reliability and robustness across diverse ASR outputs.
📝 Abstract
Automatic Speech Recognition (ASR) error correction aims to fix recognition errors while preserving accurate text. Although traditional approaches demonstrate moderate effectiveness, LLMs offer a paradigm that eliminates the need for training and labeled data. However, directly applying LLMs suffers from the hallucination problem, which may lead to modification of already-correct text. To address this problem, we propose the Reliable LLM Correction Framework (RLLM-CF), which consists of three stages: (1) error pre-detection, (2) chain-of-thought (CoT) sub-task iterative correction, and (3) reasoning process verification. Our method requires no additional information or fine-tuning of the model, and it ensures the correctness of the LLM's corrections through multi-pass processing. Experiments on AISHELL-1, AISHELL-2, and LibriSpeech show that the GPT-4o model enhanced by our framework achieves 21%, 11%, 9%, and 11.4% relative reductions in CER/WER.
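The three-stage flow described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the function names (`pre_detect`, `cot_correct`, `verify`) are assumptions, and `call_llm` is a rule-based mock standing in for a real model such as GPT-4o so the sketch runs on its own.

```python
# Hedged sketch of a three-stage ASR-correction pipeline in the spirit of
# RLLM-CF. All names and the mock `call_llm` are illustrative assumptions.
from collections import Counter


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. GPT-4o). Mocked here with a
    trivial string fix so the example is runnable without an API."""
    hypothesis = prompt.rsplit(":", 1)[-1].strip()
    return hypothesis.replace("recognise speach", "recognize speech")


def pre_detect(hypothesis: str) -> bool:
    """Stage 1: decide whether the transcript likely contains errors, so
    clean text is returned untouched (mitigates over-correction).
    In practice this would be an LLM yes/no judgment; mocked here."""
    return "speach" in hypothesis


def cot_correct(hypothesis: str, rounds: int = 2) -> str:
    """Stage 2: multi-round, CoT-prompted iterative correction."""
    text = hypothesis
    for _ in range(rounds):
        text = call_llm(f"Think step by step and fix ASR errors in: {text}")
    return text


def verify(original: str, candidates: list[str]) -> str:
    """Stage 3: self-consistency check over sampled corrections; keep the
    majority answer, falling back to the original when there is no
    clear agreement."""
    best, votes = Counter(candidates).most_common(1)[0]
    return best if votes > len(candidates) // 2 else original


def correct(hypothesis: str, samples: int = 3) -> str:
    if not pre_detect(hypothesis):  # Stage 1: skip clean transcripts
        return hypothesis
    candidates = [cot_correct(hypothesis) for _ in range(samples)]  # Stage 2
    return verify(hypothesis, candidates)  # Stage 3


print(correct("we recognise speach with asr"))  # erroneous input: corrected
print(correct("the weather is nice today"))     # clean input: left unchanged
```

The key design point the sketch tries to convey is that correction is gated on both ends: stage 1 prevents the LLM from touching text it judges clean, and stage 3 only accepts a rewrite when repeated correction passes agree, which is how the framework suppresses hallucinated edits without fine-tuning or external knowledge.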