🤖 AI Summary
Stroke-induced language impairments are conventionally assessed manually, resulting in low efficiency and poor scalability. Method: This study pioneers the systematic fine-tuning of the Whisper large speech recognition model for clinical picture-naming tasks: transcribing patient speech and analyzing linguistic function. We adapted Whisper to the clinical domain using real-world aphasia speech data and leveraged its learned representations to classify language quality. Results: Fine-tuning yielded substantial relative Word Error Rate reductions (WERR) of 87.72% for healthy controls and 71.22% for patients; language quality classification achieved F1 Macro scores of 0.74–0.75. However, cross-dataset generalization remained limited. This work demonstrates that lightweight fine-tuning of off-the-shelf ASR models enables robust, automated, and scalable stroke language assessment, establishing a novel paradigm and providing empirical validation for clinical speech analytics.
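The headline numbers above are relative Word Error Rate reductions (WERR). As a point of reference, here is a minimal pure-Python sketch of how WER and WERR are conventionally computed; the fine-tuning itself is not shown, and the example values at the end are hypothetical, not the paper's measured rates.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance between word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # substitution (or match)
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # deletion, insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def werr(wer_baseline: float, wer_finetuned: float) -> float:
    """Relative WER reduction from baseline to fine-tuned model, as a percentage."""
    return 100.0 * (wer_baseline - wer_finetuned) / wer_baseline


# Hypothetical illustration: a baseline WER of 0.50 dropping to 0.10 after fine-tuning
# corresponds to an 80% relative reduction.
print(werr(0.50, 0.10))
```

Note that for the single-word utterances used in picture naming, the reference often contains just one word, so a single substitution already yields a WER of 1.0, which is one reason baseline models look so weak on this task.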
📝 Abstract
Detailed assessment of language impairment following stroke remains a cognitively complex and clinician-intensive task, limiting timely and scalable diagnosis. Automatic Speech Recognition (ASR) foundation models offer a promising pathway to augment human evaluation through intelligent systems, but their effectiveness in the context of speech and language impairment remains uncertain. In this study, we evaluate whether Whisper, a state-of-the-art ASR foundation model, can be applied to transcribe and analyze speech from patients with stroke during a commonly used picture-naming task. We assess both verbatim transcription accuracy and the model's ability to support downstream prediction of language function, which has major implications for outcomes after stroke. Our results show that the baseline Whisper model performs poorly on single-word speech utterances. Nevertheless, fine-tuning Whisper significantly improves transcription accuracy (reducing Word Error Rate by 87.72% in healthy speech and 71.22% in speech from patients). Further, learned representations from the model enable accurate prediction of speech quality (average F1 Macro of 0.74 for healthy, 0.75 for patients). However, evaluations on an unseen (TORGO) dataset reveal limited generalizability, highlighting the inability of Whisper to perform zero-shot transcription of single-word utterances on out-of-domain clinical speech and emphasizing the need to adapt models to specific clinical populations. While challenges remain in cross-domain generalization, these findings highlight the potential of foundation models, when appropriately fine-tuned, to advance automated speech and language assessment and rehabilitation for stroke-related impairments.
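The downstream language-function results are reported as F1 Macro, the unweighted mean of per-class F1 scores, which is appropriate when classes (e.g., intact vs. impaired productions) are imbalanced. A minimal pure-Python sketch of the metric, using hypothetical labels rather than the study's data:

```python
def f1_macro(y_true, y_pred):
    """Unweighted mean of per-class F1 over all labels appearing in y_true or y_pred."""
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # F1 is the harmonic mean of precision and recall (0 if both are 0).
        f1_scores.append(
            2 * precision * recall / (precision + recall) if precision + recall else 0.0
        )
    return sum(f1_scores) / len(f1_scores)


# Hypothetical binary labels (1 = acceptable production, 0 = impaired):
print(f1_macro([1, 1, 0, 0], [1, 0, 0, 0]))
```

Because each class contributes equally regardless of its frequency, a model that ignores the minority class is penalized, which is why the paper's 0.74–0.75 scores are a more conservative summary than plain accuracy would be.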