Personalized Fine-Tuning with Controllable Synthetic Speech from LLM-Generated Transcripts for Dysarthric Speech Recognition

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatic speech recognition (ASR) for dysarthric speech suffers from low accuracy due to high acoustic variability and limited labeled data. Method: This paper proposes an LLM-driven personalized ASR optimization framework built on an integrated paradigm of controllable speech synthesis, speaker adaptation, and parameter-efficient fine-tuning: (i) an LLM generates corpus-consistent target transcripts and guides Parler-TTS to synthesize high-fidelity, content-controlled dysarthric speech; (ii) x-vectors model speaker-specific characteristics; and (iii) AdaLoRA performs lightweight fine-tuning on wav2vec 2.0 feature representations, decoupling linguistic content from individual acoustic traits. Results: The method achieves a ~23% relative word error rate (WER) reduction compared to full-parameter fine-tuning; incorporating synthetic data yields an additional ~7% relative WER reduction, for over 30% total relative improvement. The approach improves both personalization efficiency and ASR robustness for dysarthric speech.
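As a rough illustration of why the adapter-based setup described above is "lightweight," the sketch below compares the trainable-parameter count of full fine-tuning against a low-rank (LoRA/AdaLoRA-style) update for a single wav2vec 2.0-sized weight matrix. The hidden size and rank are illustrative assumptions, not values reported in the paper.

```python
# Illustrative parameter-count comparison: full fine-tuning vs. a
# low-rank update W' = W + B @ A for one square weight matrix.
# Assumed sizes: wav2vec 2.0 (base) uses hidden size 768; rank r = 8
# is a common adapter choice. AdaLoRA additionally reallocates this
# rank budget across matrices during training based on importance.
d = 768   # hidden size of one transformer weight matrix (d x d)
r = 8     # adapter rank

full_params = d * d          # every entry of W is trainable
lora_params = 2 * d * r      # only A (r x d) and B (d x r) are trainable

print(full_params)                                # 589824
print(lora_params)                                # 12288
print(round(100 * lora_params / full_params, 2))  # 2.08 (% of full FT)
```

The two-order-of-magnitude gap in trainable parameters is what makes per-speaker personalization tractable in this setting.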

📝 Abstract
In this work, we present our submission to the Speech Accessibility Project challenge for dysarthric speech recognition. We integrate parameter-efficient fine-tuning with latent audio representations to improve an encoder-decoder ASR system. Synthetic training data is generated by fine-tuning Parler-TTS to mimic dysarthric speech, using LLM-generated prompts for corpus-consistent target transcripts. Personalization with x-vectors consistently reduces word error rates (WERs) over non-personalized fine-tuning. AdaLoRA adapters outperform full fine-tuning and standard low-rank adaptation, achieving relative WER reductions of ~23% and ~22%, respectively. Further improvements (~5% WER reduction) come from incorporating wav2vec 2.0-based audio representations. Training with synthetic dysarthric speech yields up to ~7% relative WER improvement over personalized fine-tuning alone.
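The abstract does not specify how x-vectors are fused into the ASR system; a common pattern is to concatenate the utterance-level speaker embedding onto every frame of the acoustic feature sequence. The sketch below shows that fusion with assumed toy shapes (768-dim wav2vec 2.0 frames, 512-dim x-vector); it is a minimal sketch of one plausible design, not the paper's implementation.

```python
# Hedged sketch: personalize encoder input by broadcasting a fixed
# per-speaker x-vector across time and concatenating it to each frame.
# Shapes are assumptions: 768-dim wav2vec 2.0 frames, 512-dim x-vector.
T, D_FRAME, D_XVEC = 5, 768, 512

frames = [[0.0] * D_FRAME for _ in range(T)]   # T acoustic frames
xvector = [0.1] * D_XVEC                       # constant per speaker

# Each personalized frame carries both content and speaker identity.
personalized = [frame + xvector for frame in frames]

print(len(personalized), len(personalized[0]))  # 5 1280
```

Conditioning every frame on the same speaker embedding lets the fine-tuned adapters specialize to one speaker's acoustic traits while the underlying encoder stays shared.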
Problem

Research questions and friction points this paper is trying to address.

Improving dysarthric speech recognition using synthetic speech
Personalizing ASR systems with x-vectors to reduce WER
Enhancing performance via AdaLoRA adapters and wav2vec 2.0
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient fine-tuning with latent audio representations
Synthetic dysarthric speech from LLM-generated prompts
Personalization via x-vectors to reduce WER