Large Language Models based ASR Error Correction for Child Conversations

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatic speech recognition (ASR) for children's speech suffers from high error rates and poor robustness, particularly in conversational settings, yet systematic investigation remains scarce. Method: This work presents the first comprehensive study of large language models (LLMs) for post-hoc correction of children's conversational ASR outputs, under both zero-shot and fine-tuned regimes. The proposed LLM-based correction framework operates on outputs from CTC-based ASR models (e.g., Wav2Vec2) and autoregressive ones (e.g., Whisper), using both zero-shot prompting and supervised fine-tuning. Results: Experiments show that LLMs substantially reduce the word error rate (WER) of CTC-based ASR outputs (average reduction: 12.3%), but yield limited gains on context-sensitive utterances and on Whisper outputs, revealing a strong coupling between the ASR decoding mechanism and LLM correction efficacy. This work establishes an empirical benchmark for LLM-augmented post-processing of children's conversational ASR.
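The zero-shot prompting strategy described above can be illustrated with a small sketch. The paper does not publish its exact prompt, so the wording, function name, and example hypotheses below are hypothetical; the sketch only shows the general idea of presenting a CTC-based hypothesis (e.g., Wav2Vec2) and an autoregressive hypothesis (e.g., Whisper), optionally with dialogue context, and asking an LLM for a corrected transcription.

```python
def build_correction_prompt(ctc_hypothesis, ar_hypothesis, context=None):
    """Assemble a zero-shot ASR-correction prompt (illustrative sketch).

    ctc_hypothesis: transcript from a CTC-based model such as Wav2Vec2.
    ar_hypothesis:  transcript from an autoregressive model such as Whisper.
    context:        optional preceding dialogue turns.
    """
    lines = [
        "You are correcting automatic transcriptions of a child's "
        "conversational speech.",
        f"Hypothesis A (CTC model): {ctc_hypothesis}",
        f"Hypothesis B (autoregressive model): {ar_hypothesis}",
    ]
    if context:
        # Contextual variant: prepend prior dialogue after the instruction.
        lines.insert(1, f"Preceding dialogue: {context}")
    lines.append("Return the single most likely corrected transcription.")
    return "\n".join(lines)

# Hypothetical example hypotheses, not taken from the paper's datasets.
prompt = build_correction_prompt(
    ctc_hypothesis="i wan go too the park",
    ar_hypothesis="I want to go to the park.",
)
print(prompt)
```

The returned string would be sent to the LLM (zero-shot) or used as the input side of a supervised fine-tuning pair whose target is the reference transcript.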

📝 Abstract
Automatic Speech Recognition (ASR) has recently shown remarkable progress, but accurately transcribing children's speech remains a significant challenge. Recent developments in Large Language Models (LLMs) have shown promise in improving ASR transcriptions. However, their applications in child speech including conversational scenarios are underexplored. In this study, we explore the use of LLMs in correcting ASR errors for conversational child speech. We demonstrate the promises and challenges of LLMs through experiments on two children's conversational speech datasets with both zero-shot and fine-tuned ASR outputs. We find that while LLMs are helpful in correcting zero-shot ASR outputs and fine-tuned CTC-based ASR outputs, it remains challenging for LLMs to improve ASR performance when incorporating contextual information or when using fine-tuned autoregressive ASR (e.g., Whisper) outputs.
Problem

Research questions and friction points this paper is trying to address.

Improving ASR accuracy for child conversational speech
Exploring LLMs in correcting ASR errors for children
Challenges in LLM-based correction for contextual child speech
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs correct ASR errors in child speech
Tested on zero-shot and fine-tuned ASR outputs
Challenges with contextual info and Whisper outputs
Anfeng Xu
University of Southern California
Speech Processing · Multimodal AI · LLM · Deep Learning
Tiantian Feng
Postdoc Researcher
Health and Behaviors · Wearable Computing · Affective Computing · Speech and Biosignal · Responsible ML
So Hyun Kim
School of Psychology, Korea University, South Korea
Somer Bishop
Weill Institute for Neurosciences, University of California, San Francisco, USA
Catherine Lord
Semel Institute of Neuroscience and Human Behavior, University of California, Los Angeles, USA
Shrikanth Narayanan
Viterbi School of Engineering, University of Southern California, USA