🤖 AI Summary
This work addresses the limited effectiveness of directly applying speaker-specific fine-tuning (SS-FT) to general-purpose pre-trained automatic speech recognition (ASR) models for highly variable disordered speech, such as speech associated with dysarthria or aphasia. To overcome this challenge, the authors propose a two-stage adaptation framework: first performing speaker-independent fine-tuning (SI-FT) on multi-speaker disordered speech data, then performing speaker-specific fine-tuning on the target speaker. The study provides the first systematic validation of SI-FT as an effective initialization strategy for personalization. Evaluated on disordered speech benchmarks including AphasiaBank and UA-Speech, the approach significantly improves recognition accuracy while incurring only manageable performance degradation on out-of-domain canonical speech datasets such as TED-LIUM v3 and FLEURS. Experiments with Whisper-Large-v3 and Qwen3-ASR consistently demonstrate the superiority of the two-stage strategy over direct SS-FT.
📝 Abstract
Personalizing automatic speech recognition (ASR) systems for non-normative speech, such as dysarthric and aphasic speech, is challenging. While speaker-specific fine-tuning (SS-FT) is widely used, it is typically initialized directly from a generic pre-trained model. Whether speaker-independent adaptation provides a stronger initialization prior under this domain mismatch remains unclear. In this work, we propose a two-stage adaptation framework consisting of speaker-independent fine-tuning (SI-FT) on multi-speaker non-normative data followed by SS-FT, and evaluate it through a controlled comparison with direct SS-FT under identical per-speaker conditions. Experiments on AphasiaBank and UA-Speech with Whisper-Large-v3 and Qwen3-ASR, alongside evaluation on the typical-speech datasets TED-LIUM v3 and FLEURS, show that two-stage adaptation consistently improves personalization while maintaining manageable out-of-domain (OOD) trade-offs.
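As a concrete illustration of the recipe, the sketch below implements the two stages with the Hugging Face `transformers` Trainer API. The abstract specifies only the order of the stages, not the training code, so the toolkit choice, the hyperparameters, and the `make_split` data-loading helper are all assumptions for illustration.

```python
# Minimal sketch of the two-stage adaptation framework: SI-FT on pooled
# multi-speaker disordered speech, then SS-FT on the target speaker,
# initialized from the SI-FT checkpoint. Hyperparameters and the
# `make_split` loader are illustrative placeholders, not the paper's setup.
from transformers import (
    WhisperForConditionalGeneration,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)


def make_split(speakers):
    """Placeholder: return a dataset of model-ready examples
    (dicts with `input_features` and `labels`) for the given speakers."""
    raise NotImplementedError


def run_stage(model, dataset, output_dir):
    """Run one fine-tuning stage and return the adapted model."""
    args = Seq2SeqTrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=8,  # illustrative values
        learning_rate=1e-5,
        num_train_epochs=3,
    )
    trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=dataset)
    trainer.train()
    return trainer.model


# Stage 0: generic pre-trained model (here Whisper-Large-v3, as in the paper).
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

# Stage 1 (SI-FT): fine-tune on the multi-speaker disordered-speech pool.
model = run_stage(model, make_split(speakers="multi_speaker_pool"), "si_ft")

# Stage 2 (SS-FT): fine-tune on the target speaker's data, starting from the
# SI-FT checkpoint. The baseline the paper compares against instead
# initializes this stage directly from the generic pre-trained model.
model = run_stage(model, make_split(speakers="target_only"), "ss_ft")
```

The key design point is that stage 2 starts from the stage 1 weights rather than from the generic checkpoint; the paper's controlled comparison holds the per-speaker SS-FT conditions fixed and varies only this initialization.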