Unsupervised Rhythm and Voice Conversion to Improve ASR on Dysarthric Speech

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dysarthric speech—characterized by high inter-speaker variability and slowed articulation—severely degrades automatic speech recognition (ASR) performance. To address this, we propose an unsupervised joint prosodic and acoustic transformation framework that maps dysarthric speech to temporally and spectrally normalized representations approximating healthy speech, thereby enhancing ASR robustness. Our key contributions are: (1) a novel syllable-level prosody modeling mechanism tailored to non-uniform segment durations and weakened rhythmic structure; and (2) an end-to-end, unpaired joint transformation built upon an extended RnV (Rhythm-and-Voice) framework, integrating LF-MMI acoustic modeling with Whisper fine-tuning, and incorporating syllable-level duration normalization and spectral mapping. On the TORGO dataset, the LF-MMI model achieves substantial WER reduction—up to 32.7% for severely dysarthric samples—while Whisper fine-tuning yields marginal gains, underscoring the critical role of explicit prosodic modeling.
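The summary's "syllable-level duration normalization" can be illustrated with a minimal sketch. This is not the authors' code: it assumes syllable boundaries are already available (the paper's actual segmentation and spectral mapping are more involved) and simply time-stretches each syllable of feature frames toward a reference mean duration, evening out the rhythm.

```python
# Hedged sketch of syllable-level duration normalization (illustrative only,
# not the RnV implementation). Assumes syllable boundaries are known: each
# syllable is a list of feature frames. Every syllable is resampled so its
# length matches a reference (e.g., healthy-speech) mean syllable duration.

def stretch_frames(frames, target_len):
    """Resample a frame sequence to target_len by nearest-index lookup."""
    if target_len <= 0 or not frames:
        return []
    n = len(frames)
    return [frames[min(n - 1, round(i * n / target_len))]
            for i in range(target_len)]

def normalize_syllable_durations(syllables, ref_mean_len):
    """Map every syllable to the reference mean length (uniform rhythm)."""
    return [stretch_frames(s, ref_mean_len) for s in syllables]
```

For example, dysarthric syllables of 12, 30, and 8 frames would all come out at the reference length, removing the slowed, irregular timing before the spectral conversion step.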

📝 Abstract
Automatic speech recognition (ASR) systems struggle with dysarthric speech due to high inter-speaker variability and slow speaking rates. To address this, we explore dysarthric-to-healthy speech conversion for improved ASR performance. Our approach extends the Rhythm and Voice (RnV) conversion framework by introducing a syllable-based rhythm modeling method suited for dysarthric speech. We assess its impact on ASR by training LF-MMI models and fine-tuning Whisper on converted speech. Experiments on the Torgo corpus reveal that LF-MMI achieves significant word error rate reductions, especially for more severe cases of dysarthria, while fine-tuning Whisper on converted data has minimal effect on its performance. These results highlight the potential of unsupervised rhythm and voice conversion for dysarthric ASR. Code available at: https://github.com/idiap/RnV
Problem

Research questions and friction points this paper is trying to address.

Improving ASR performance for dysarthric speech
Converting dysarthric-to-healthy speech rhythm and voice
Reducing word error rates in severe dysarthria cases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Syllable-based rhythm modeling for dysarthric speech
Extends Rhythm and Voice conversion framework
Trains LF-MMI models on converted speech
Karl El Hajal
EPFL
Speech Processing · Natural Language Processing · Machine Learning
Enno Hermann
Postdoc, IDIAP Research Institute, Switzerland
Speech Recognition · Speech Synthesis · Natural Language Processing · Machine Learning
Sevada Hovsepyan
Idiap Research Institute, CH-1920 Martigny, Switzerland
Mathew Magimai.-Doss
Idiap Research Institute, CH-1920 Martigny, Switzerland