Fewer Hallucinations, More Verification: A Three-Stage LLM-Based Framework for ASR Error Correction

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
ASR error correction must rectify erroneous tokens while preserving correct ones, yet direct LLM invocation often induces hallucinatory over-correction. To address this, we propose a training-free, annotation-free three-stage LLM framework: (1) pre-detection of potential errors; (2) chain-of-thought (CoT)-guided subtask decomposition and multi-round iterative correction; and (3) self-consistent reasoning process verification. Our approach introduces the first fine-tuning-free, external-knowledge-free multi-stage verification paradigm, substantially mitigating hallucination through structured reasoning and explicit process validation. Evaluated on AISHELL-1, AISHELL-2, and LibriSpeech, it achieves up to 21% relative reduction in character error rate (CER) and word error rate (WER), demonstrating significant improvements in correction reliability and robustness across diverse ASR outputs.

📝 Abstract
Automatic Speech Recognition (ASR) error correction aims to correct recognition errors while preserving accurate text. Although traditional approaches demonstrate moderate effectiveness, LLMs offer a paradigm that eliminates the need for training and labeled data. However, directly using LLMs encounters the hallucination problem, which may lead to the modification of correct text. To address this problem, we propose the Reliable LLM Correction Framework (RLLM-CF), which consists of three stages: (1) error pre-detection, (2) chain-of-thought sub-task iterative correction, and (3) reasoning process verification. The advantage of our method is that it requires neither additional information nor fine-tuning of the model, and it ensures the correctness of the LLM correction under multi-pass verification. Experiments on AISHELL-1, AISHELL-2, and LibriSpeech show that the GPT-4o model enhanced by our framework achieves relative CER/WER reductions of 21%, 11%, 9%, and 11.4%.
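The three stages described in the abstract can be sketched as a simple control loop around an LLM call. The sketch below is an illustrative reconstruction, not the paper's actual implementation: the `llm` callable, the prompt wording, and the yes/no voting scheme are all hypothetical placeholders standing in for the framework's pre-detection, CoT correction, and self-consistency verification steps.

```python
# Minimal sketch of a three-stage LLM correction pipeline in the spirit
# of RLLM-CF. The `llm` callable and all prompt strings are hypothetical
# placeholders, not the paper's actual prompts.
from collections import Counter
from typing import Callable

def three_stage_correct(hypothesis: str, llm: Callable[[str], str],
                        max_rounds: int = 3, n_votes: int = 3) -> str:
    # Stage 1: error pre-detection -- skip correction entirely when the
    # model judges the transcript error-free, avoiding over-correction.
    if llm(f"Does this transcript contain errors? {hypothesis}").strip().lower() == "no":
        return hypothesis

    # Stage 2: CoT-guided iterative correction over multiple rounds,
    # stopping early once the output converges.
    text = hypothesis
    for _ in range(max_rounds):
        revised = llm(f"Think step by step and correct ASR errors only: {text}")
        if revised == text:  # converged: no further edits proposed
            break
        text = revised

    # Stage 3: self-consistent verification -- sample several verdicts
    # and accept the correction only if the majority validates it;
    # otherwise fall back to the original hypothesis.
    votes = [llm(f"Is this a valid correction of '{hypothesis}'? {text}")
             for _ in range(n_votes)]
    majority, _ = Counter(v.strip().lower() for v in votes).most_common(1)[0]
    return text if majority == "yes" else hypothesis
```

The fallback in stage 3 is what guards against hallucinatory over-correction: a rejected correction reverts to the raw ASR output rather than introducing new errors.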
Problem

Research questions and friction points this paper is trying to address.

Reducing hallucinations in LLM-based ASR error correction
Ensuring verification of corrected ASR text accuracy
Eliminating need for training data in ASR correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage LLM framework for ASR correction
Error pre-detection and iterative correction
Multi-pass verification reduces hallucinations
Yangui Fang
Huazhong University of Science and Technology
Speech LLM, ASR
Baixu Cheng
Huazhong University of Science and Technology, School of Electronic Information and Communications
Jing Peng
MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Shanghai Jiao Tong University, Shanghai, China
Xu Li
AISpeech Ltd, Suzhou, China
Yu Xi
MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Shanghai Jiao Tong University, Shanghai, China
Chengwei Zhang
Huazhong University of Science and Technology, School of Electronic Information and Communications
Guohui Zhong
Huazhong University of Science and Technology, School of Electronic Information and Communications