🤖 AI Summary
To address speaker bias and poor generalization when fine-tuning large self-supervised speech models for dysarthric and elderly speech recognition, a setting challenged by limited data and high inter-speaker and inter-pathology heterogeneity, this paper proposes a structured speaker-deficiency disentanglement adaptation mechanism. It is the first to explicitly decouple speaker identity from articulatory impairment or aging severity, enabling zero-shot generalization across arbitrary combinations of seen and unseen speakers and pathology types. Building on SSL backbones (HuBERT, Wav2Vec2-Conformer), the authors design modular, composable adapters, namely speaker adapters and deficiency adapters, that support both supervised fine-tuning and unsupervised test-time adaptation. Evaluated on UASpeech and DementiaBank Pitt, the method achieves up to a 3.01% absolute WER reduction. On UASpeech, it attains a new state-of-the-art WER of 19.45% (49.34% on severely unintelligible utterances; 33.17% on out-of-vocabulary words).
📝 Abstract
Data-intensive fine-tuning of speech foundation models (SFMs) on scarce and diverse dysarthric and elderly speech leads to data bias and poor generalization to unseen speakers. This paper proposes novel structured speaker-deficiency adaptation approaches for SSL pre-trained SFMs on such data. Speaker- and speech-deficiency-invariant SFMs were constructed in the supervised adaptive fine-tuning stage to reduce undue bias towards training-data speakers, and serve as a more neutral and robust starting point for test-time unsupervised adaptation. Speech variability attributed to speaker identity and to speech impairment severity, or aging-induced neurocognitive decline, is modelled using separate adapters that can be combined to model any seen or unseen speaker. Experiments on the UASpeech dysarthric and DementiaBank Pitt elderly speech corpora suggest that structured speaker-deficiency adaptation of HuBERT and Wav2vec2-conformer models consistently outperforms baseline SFMs using either: a) no adapters; b) global adapters shared among all speakers; or c) single-attribute adapters modelling speaker or deficiency labels alone, with statistically significant WER reductions of up to 3.01% and 1.50% absolute (10.86% and 6.94% relative) on the two tasks, respectively. The lowest published WER of 19.45% (49.34% on very low intelligibility, 33.17% on unseen words) is obtained on the UASpeech test set of 16 dysarthric speakers.
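The core mechanism, separate single-attribute adapters that are composed to model any speaker-deficiency combination, can be illustrated with a minimal numpy sketch. This is an assumption-laden toy (residual bottleneck adapters on one hidden state; the adapter shapes, names, and speaker/severity labels are illustrative, not the paper's actual configuration or identifiers):

```python
import numpy as np

# Hypothetical sketch: compose a per-speaker adapter with a per-deficiency
# (severity) adapter on top of a frozen SSL backbone hidden state.
rng = np.random.default_rng(0)
DIM, BOTTLENECK = 8, 4  # illustrative sizes, not the paper's

def make_adapter():
    # Residual bottleneck adapter: down-project, ReLU, up-project.
    W_down = rng.normal(scale=0.1, size=(DIM, BOTTLENECK))
    W_up = rng.normal(scale=0.1, size=(BOTTLENECK, DIM))
    return (W_down, W_up)

def apply_adapter(h, adapter):
    W_down, W_up = adapter
    return h + np.maximum(h @ W_down, 0.0) @ W_up  # residual connection

# One adapter per speaker identity, one per deficiency severity level.
speaker_adapters = {"spk01": make_adapter(), "spk02": make_adapter()}
deficiency_adapters = {"mild": make_adapter(), "severe": make_adapter()}

def adapt(h, speaker, severity):
    # Composing the two single-attribute adapters lets any (speaker,
    # severity) pair be modelled, including pairs never seen together.
    h = apply_adapter(h, speaker_adapters[speaker])
    return apply_adapter(h, deficiency_adapters[severity])

h = rng.normal(size=(1, DIM))         # stand-in for a backbone hidden state
out = adapt(h, "spk01", "severe")     # one attribute combination
out_new = adapt(h, "spk02", "mild")   # a different combination, same adapters
print(out.shape)  # (1, 8)
```

The point of the composition is parameter sharing: with S speakers and K severity levels, only S + K adapters are trained rather than S x K speaker-specific ones, which is what allows a deficiency adapter learned on training speakers to be reused for an unseen speaker at test time.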