🤖 AI Summary
This work addresses key challenges in full-text error correction for automatic speech recognition (ASR) with large language models, namely limited stability, controllability, completeness, and fluency, by proposing the Chain of Correction (CoC) framework. CoC corrects the transcript segment by segment within a standard multi-turn chat format, using the pre-recognized text of each segment as guidance and the pre-recognized full text as global context, which helps the model track document-level semantics and keep corrections coherent. The framework handles a broad spectrum of error types, including punctuation restoration and inverse text normalization, and exposes a tunable correction threshold for balancing under-correction against over-rephrasing. Fine-tuned and evaluated on the open-source ChFT dataset, the CoC model substantially reduces error rates on full-text ASR outputs, outperforms baseline and benchmark systems, extrapolates to extremely long transcripts, and is further analyzed for whether other types of guiding information can steer the correction process.
📝 Abstract
Full-text error correction with Large Language Models (LLMs) for Automatic Speech Recognition (ASR) has gained increased attention due to its potential to correct errors across long contexts and address a broader spectrum of error types, including punctuation restoration and inverse text normalization. Nevertheless, many challenges persist, including issues related to stability, controllability, completeness, and fluency. To mitigate these challenges, this paper proposes the Chain of Correction (CoC) for full-text error correction with LLMs, which corrects errors segment by segment using pre-recognized text as guidance within a regular multi-turn chat format. The CoC also uses pre-recognized full text for context, allowing the model to better grasp global semantics and maintain a comprehensive overview of the entire content. Utilizing the open-sourced full-text error correction dataset ChFT, we fine-tune a pre-trained LLM to evaluate the performance of the CoC framework. Experimental results demonstrate that the CoC effectively corrects errors in full-text ASR outputs, significantly outperforming baseline and benchmark systems. We further analyze how to set the correction threshold to balance under-correction and over-rephrasing, extrapolate the CoC model on extremely long ASR outputs, and investigate whether other types of information can be employed to guide the error correction process.
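The segment-by-segment correction in a multi-turn chat format described above can be sketched as prompt construction code. Note this is an illustrative assumption about the dialogue layout, not the paper's actual implementation: the function name, message schema, and prompt wording are all hypothetical.

```python
def build_coc_dialogue(asr_segments, corrected_so_far=None):
    """Assemble a multi-turn chat for segment-wise Chain-of-Correction.

    Hypothetical format: the system prompt carries the pre-recognized full
    text as global context; each user turn presents one pre-recognized ASR
    segment as guidance, and each assistant turn holds its correction. The
    final user turn is the next segment awaiting correction.
    """
    full_text = " ".join(asr_segments)
    messages = [{
        "role": "system",
        "content": (
            "You correct ASR transcription errors. The full pre-recognized "
            "transcript, for global context, is:\n" + full_text + "\n"
            "Correct each segment you receive, restoring punctuation and "
            "applying inverse text normalization."
        ),
    }]
    corrected_so_far = corrected_so_far or []
    # Replay earlier turns: raw segment (user) followed by its fix (assistant).
    for asr_seg, fixed_seg in zip(asr_segments, corrected_so_far):
        messages.append({"role": "user", "content": asr_seg})
        messages.append({"role": "assistant", "content": fixed_seg})
    # The next uncorrected segment becomes the pending user turn.
    next_idx = len(corrected_so_far)
    if next_idx < len(asr_segments):
        messages.append({"role": "user", "content": asr_segments[next_idx]})
    return messages
```

In training or inference, such a message list would be fed to the fine-tuned LLM one turn at a time, appending each generated correction before requesting the next segment.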