🤖 AI Summary
ASR transcription errors severely degrade clinical dialogue summarization quality, while authentic noisy medical data remains scarce. To address this, we propose the first LLM-based in-context learning framework for ASR noise modeling and synthetic text generation—requiring no real speech input—enabling high-fidelity, controllable, error-injected clinical dialogue texts and eliminating reliance on speech-text pairs typical of conventional data augmentation. Our method integrates LLM instruction tuning, medical-domain-controllable text generation, and joint training with neural summarization models (BART/LED). Evaluated across multiple clinical summarization benchmarks under realistic ASR error conditions, our approach achieves a +4.2-point improvement in ROUGE-L and a 17% gain in factual consistency over baselines. Moreover, it demonstrates strong zero-shot generalization to unseen domains and error patterns.
📝 Abstract
Automatic Speech Recognition (ASR) systems are pivotal in transcribing speech into text, yet the errors they introduce can significantly degrade the performance of downstream tasks like summarization. This issue is particularly pronounced in clinical dialogue summarization, a low-resource domain where supervised data for fine-tuning is scarce, necessitating the use of ASR models as black-box solutions. Employing conventional data augmentation for enhancing the noise robustness of summarization models is not feasible either due to the unavailability of sufficient medical dialogue audio recordings and corresponding ASR transcripts. To address this challenge, we propose MEDSAGE, an approach for generating synthetic samples for data augmentation using Large Language Models (LLMs). Specifically, we leverage the in-context learning capabilities of LLMs and instruct them to generate ASR-like errors based on a few available medical dialogue examples with audio recordings. Experimental results show that LLMs can effectively model ASR noise, and incorporating this noisy data into the training process significantly improves the robustness and accuracy of medical dialogue summarization systems. This approach addresses the challenges of noisy ASR outputs in critical applications, offering a robust solution to enhance the reliability of clinical dialogue summarization.