🤖 AI Summary
Automatic speech recognition (ASR) for children's speech suffers from high error rates and poor robustness, particularly in conversational settings, yet systematic investigation remains scarce. Method: This work presents the first comprehensive study of large language models (LLMs) for post-hoc correction of children's conversational ASR output, under both zero-shot and fine-tuned ASR regimes. We propose an LLM-based correction framework that fuses outputs from CTC-based ASR models (e.g., Wav2Vec2) and autoregressive ASR models (e.g., Whisper), using both zero-shot prompting and supervised fine-tuning strategies. Results: Experiments on two children's conversational speech datasets show that LLMs substantially reduce word error rate (WER) for CTC-based ASR outputs (average reduction: 12.3%), but yield limited gains when contextual information is incorporated or when correcting fine-tuned Whisper outputs, revealing a strong coupling between the ASR decoding mechanism and LLM correction efficacy. This work establishes a novel paradigm and empirical benchmark for LLM-augmented post-processing of children's ASR.
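One plausible reading of the zero-shot fusion setup can be sketched as a prompt that presents both ASR hypotheses to an LLM and asks for a single corrected transcription. This is a minimal illustrative sketch only: the function name, prompt wording, and example hypotheses are assumptions, not the paper's actual prompt or data.

```python
# Illustrative sketch of zero-shot LLM-based ASR error correction via
# hypothesis fusion. The prompt wording below is a hypothetical example,
# not the prompt used in the study.

def build_correction_prompt(ctc_hypothesis: str, ar_hypothesis: str) -> str:
    """Fuse hypotheses from a CTC-based ASR (e.g., Wav2Vec2) and an
    autoregressive ASR (e.g., Whisper) into one correction prompt."""
    return (
        "The following are two automatic transcriptions of the same "
        "utterance spoken by a child in a conversation. Combine them and "
        "output the single most likely correct transcription, with no "
        "extra text.\n"
        f"Transcription A (CTC): {ctc_hypothesis}\n"
        f"Transcription B (autoregressive): {ar_hypothesis}\n"
        "Corrected transcription:"
    )

# Hypothetical hypotheses for one child utterance; the CTC output shows
# typical spelling-level errors, the autoregressive output is more fluent.
prompt = build_correction_prompt(
    ctc_hypothesis="i want to to play wif the blocks",
    ar_hypothesis="I want to play with the blocks.",
)
# In the zero-shot regime this prompt would be sent to an off-the-shelf
# LLM; in the fine-tuned regime the LLM would first be trained on
# (hypotheses, reference transcript) pairs from child speech data.
print(prompt)
```

In the fine-tuned regime the same input format would serve as the training example's input side, with the human reference transcript as the target.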
📝 Abstract
Automatic Speech Recognition (ASR) has recently shown remarkable progress, but accurately transcribing children's speech remains a significant challenge. Recent developments in Large Language Models (LLMs) have shown promise for improving ASR transcriptions, yet their application to child speech, including conversational scenarios, remains underexplored. In this study, we explore the use of LLMs to correct ASR errors in conversational child speech. We demonstrate the promise and the challenges of LLMs through experiments on two children's conversational speech datasets, using both zero-shot and fine-tuned ASR outputs. We find that while LLMs are helpful in correcting zero-shot ASR outputs and fine-tuned CTC-based ASR outputs, it remains challenging for them to improve ASR performance when incorporating contextual information or when using fine-tuned autoregressive ASR (e.g., Whisper) outputs.