🤖 AI Summary
Automatic speech recognition (ASR) for dysarthric speech suffers from low accuracy due to high acoustic variability and limited labeled data.
Method: This paper proposes an LLM-driven personalized ASR optimization framework, introducing a novel integrated paradigm of “controllable speech synthesis + speaker adaptation + parameter-efficient fine-tuning”: (i) an LLM generates target text and guides Parler-TTS to synthesize high-fidelity, content-controlled dysarthric speech; (ii) x-vectors model speaker-specific characteristics; and (iii) AdaLoRA performs lightweight fine-tuning in the wav2vec 2.0 feature space, decoupling linguistic content from individual acoustic traits.
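The parameter-efficient piece of the method can be illustrated with a minimal numpy sketch of an AdaLoRA-style update: the frozen weight matrix is adapted by an SVD-like low-rank term whose rank is pruned by importance. All dimensions, the magnitude-based importance score, and the rank budget below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 768, 8  # wav2vec 2.0 hidden size; initial adapter rank (assumed values)

# Frozen pretrained weight; AdaLoRA learns an SVD-like update dW = P @ diag(lam) @ Q.
W = rng.standard_normal((d, d))
P = rng.standard_normal((d, r)) * 0.01
Q = rng.standard_normal((r, d)) * 0.01
lam = rng.standard_normal(r)

# Adaptive rank allocation: prune the least "important" singular values
# (importance approximated here by magnitude; the real method uses gradient-based scores).
budget = 4
keep = np.argsort(np.abs(lam))[-budget:]
P, lam, Q = P[:, keep], lam[keep], Q[keep, :]

W_adapted = W + P @ np.diag(lam) @ Q

full_params = d * d
adapter_params = P.size + lam.size + Q.size
print(f"trainable fraction: {adapter_params / full_params:.4f}")  # ~1% of full fine-tuning
```

This is why AdaLoRA is "lightweight": only the low-rank factors are trained, roughly 1% of the parameters a full fine-tune would touch at these dimensions.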
Results: The method reduces word error rate (WER) by a relative 23% compared to full-parameter fine-tuning; adding synthetic training data yields a further ~7% relative WER reduction, for over 30% total relative improvement. Together these gains substantially improve both personalization efficiency and ASR robustness for dysarthric speech.
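The "over 30%" headline figure follows if the three relative reductions reported in the abstract (~23% from AdaLoRA, ~5% from wav2vec 2.0 representations, ~7% from synthetic data) are assumed to compound multiplicatively, which is a common but here unverified way to combine successive relative WER gains:

```python
# Successive relative WER reductions compound multiplicatively (assumed interpretation).
reductions = [0.23, 0.05, 0.07]  # AdaLoRA, wav2vec 2.0 features, synthetic data
remaining = 1.0
for r in reductions:
    remaining *= (1 - r)
total = 1 - remaining
print(f"total relative WER reduction: {total:.1%}")  # ~32%
```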
📝 Abstract
In this work, we present our submission to the Speech Accessibility Project challenge for dysarthric speech recognition. We integrate parameter-efficient fine-tuning with latent audio representations to improve an encoder-decoder ASR system. Synthetic training data is generated by fine-tuning Parler-TTS to mimic dysarthric speech, using LLM-generated prompts for corpus-consistent target transcripts. Personalization with x-vectors consistently reduces word error rates (WERs) over non-personalized fine-tuning. AdaLoRA adapters outperform full fine-tuning and standard low-rank adaptation, achieving relative WER reductions of ~23% and ~22%, respectively. Further improvements (~5% relative WER reduction) come from incorporating wav2vec 2.0-based audio representations. Training with synthetic dysarthric speech yields up to ~7% relative WER improvement over personalized fine-tuning alone.
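One common way to personalize an acoustic model with x-vectors is to broadcast the fixed utterance-level speaker embedding across time and concatenate it with the frame-level features; the sketch below illustrates that pattern with random arrays. The dimensions and the concatenation scheme are assumptions for illustration, and the paper's exact integration may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, X = 200, 768, 512  # frames, wav2vec 2.0 feature dim, x-vector dim (assumed)

frame_feats = rng.standard_normal((T, F))  # latent audio representations per frame
x_vector = rng.standard_normal(X)          # one speaker embedding per utterance

# Broadcast the x-vector over time and concatenate per frame, so downstream
# layers condition on speaker identity at every step.
tiled = np.broadcast_to(x_vector, (T, X))
personalized = np.concatenate([frame_feats, tiled], axis=1)
print(personalized.shape)  # (200, 1280)
```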