🤖 AI Summary
Current ASR models (e.g., Whisper) exhibit severe performance degradation on atypical speech such as dysarthric speech, with word error rates (WER) reaching ~71%. To address this, we propose a two-stage hybrid modeling framework, combining group-level and individual-level adaptation, that jointly captures cross-speaker commonalities and speaker-specific idiosyncrasies. First, we pretrain Whisper's speech encoder on dysarthric speech from other speakers to learn group-level acoustic representations ("dysarthric-normative" pretraining). Second, we freeze the language-model decoder and perform lightweight fine-tuning of the encoder alone, using minimal individual data ("dysarthric-idiosyncratic" adaptation). This work presents the first systematic comparison between typical and atypical speech modeling paradigms. With only 128 personalized utterances, our method achieves a WER of 32%, outperforming a pure individual-adaptation baseline trained on 256 samples. It halves data requirements while improving generalizability, establishing an efficient, low-resource paradigm for atypical speech recognition.
📝 Abstract
State-of-the-art automatic speech recognition (ASR) models like Whisper perform poorly on atypical speech, such as that produced by individuals with dysarthria. Prior work on atypical speech has mostly investigated fully personalized (or idiosyncratic) models, but modeling strategies that can both generalize and handle idiosyncrasy could be more effective for capturing atypical speech. To investigate this, we compare four strategies: (a) *normative* models trained on typical speech (no personalization), (b) *idiosyncratic* models completely personalized to individuals, (c) *dysarthric-normative* models trained on other dysarthric speakers, and (d) *dysarthric-idiosyncratic* models, which combine strategies by first modeling normative patterns before adapting to individual speech. In this case study, we find that the dysarthric-idiosyncratic model outperforms the idiosyncratic approach while requiring less than half as much personalized data (36.43 WER with 128 training utterances vs. 36.99 with 256). Further, we find that tuning the speech encoder alone (as opposed to the LM decoder) yields the best results, reducing word error rate from 71% to 32% on average. Our findings highlight the value of leveraging both normative (cross-speaker) and idiosyncratic (speaker-specific) patterns to improve ASR for underrepresented speech populations.
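The encoder-only adaptation the abstract describes (freeze the LM decoder, fine-tune only the speech encoder) can be sketched as below. This is a minimal illustration, not the authors' code: it assumes the Hugging Face `transformers` implementation of Whisper, and a tiny randomly initialized `WhisperConfig` stands in for a real pretrained checkpoint (e.g., `openai/whisper-small`) to keep the example self-contained.

```python
# Sketch: encoder-only fine-tuning with a frozen decoder (assumed setup,
# using the Hugging Face transformers Whisper implementation).
from transformers import WhisperConfig, WhisperForConditionalGeneration

# Tiny config as a stand-in for a pretrained checkpoint; in practice you
# would load WhisperForConditionalGeneration.from_pretrained(...) instead.
config = WhisperConfig(
    d_model=64,
    encoder_layers=2,
    decoder_layers=2,
    encoder_attention_heads=2,
    decoder_attention_heads=2,
    encoder_ffn_dim=128,
    decoder_ffn_dim=128,
)
model = WhisperForConditionalGeneration(config)

# Freeze every parameter, then re-enable gradients for the speech encoder
# only, so the optimizer updates just the encoder during adaptation.
for param in model.parameters():
    param.requires_grad = False
for param in model.model.encoder.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable}/{total}")
```

An optimizer built from `filter(lambda p: p.requires_grad, model.parameters())` then touches only the encoder, which is what makes adaptation with as few as 128 personalized utterances feasible.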