🤖 AI Summary
This work addresses the challenge of disentangling speaker identity from pathological speech characteristics in dysarthric voice synthesis, a task hindered by high variability and scarce annotated data. To improve controllability and robustness, the authors propose a prototype-guided disentanglement framework, built on a pre-trained text-to-speech (TTS) model, that separates vocal timbre and dysarthric articulation within a unified latent space. A pathology prototype codebook yields interpretable and controllable representations, while dual classifiers combined with a gradient reversal layer enforce invariance of the speaker embeddings to pathological attributes, substantially improving disentanglement. Evaluated on the TORGO dataset, the method enables bidirectional conversion between healthy and dysarthric speech, significantly improving both the perceptual speaker quality of reconstructed voices and downstream automatic speech recognition (ASR) performance.
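The paper's code is not reproduced here, but the adversarial invariance objective follows a well-known pattern: a pathology classifier attached to the speaker embedding through a gradient reversal layer, trained jointly with a cooperative speaker classifier. Below is a minimal PyTorch sketch of that wiring; all names (`DualClassifierHead`, `lambd`, `n_pathology_classes`) are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales and negates gradients in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back toward the speaker encoder.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DualClassifierHead(nn.Module):
    """Hypothetical dual-classifier head: the speaker classifier is trained
    cooperatively, while the pathology classifier is trained adversarially
    through gradient reversal, pushing the speaker embedding to become
    invariant to pathological attributes."""

    def __init__(self, dim, n_speakers, n_pathology_classes=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.speaker_clf = nn.Linear(dim, n_speakers)
        self.pathology_clf = nn.Linear(dim, n_pathology_classes)

    def forward(self, speaker_emb):
        spk_logits = self.speaker_clf(speaker_emb)
        path_logits = self.pathology_clf(grad_reverse(speaker_emb, self.lambd))
        return spk_logits, path_logits


# Toy usage: both losses are ordinary cross-entropies; the reversal layer
# alone makes the pathology branch adversarial with respect to the embedding.
head = DualClassifierHead(dim=256, n_speakers=10)
emb = torch.randn(4, 256, requires_grad=True)
spk_logits, path_logits = head(emb)
loss = nn.functional.cross_entropy(spk_logits, torch.randint(10, (4,))) \
     + nn.functional.cross_entropy(path_logits, torch.randint(2, (4,)))
loss.backward()  # pathology gradient reaches the embedding with reversed sign
```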
📝 Abstract
Dysarthric speech exhibits high variability and limited labeled data, posing major challenges for both automatic speech recognition (ASR) and assistive speech technologies. Existing approaches rely on synthetic data augmentation or speech reconstruction, yet often entangle speaker identity with pathological articulation, limiting controllability and robustness. In this paper, we propose ProtoDisent-TTS, a prototype-based disentanglement TTS framework built on a pre-trained text-to-speech backbone that factorizes speaker timbre and dysarthric articulation within a unified latent space. A pathology prototype codebook provides interpretable and controllable representations of healthy and dysarthric speech patterns, while a dual-classifier objective with a gradient reversal layer enforces invariance of speaker embeddings to pathological attributes. Experiments on the TORGO dataset demonstrate that this design enables bidirectional transformation between healthy and dysarthric speech, leading to consistent ASR performance gains and robust, speaker-aware speech reconstruction.
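The abstract does not spell out how the pathology prototype codebook is realized. One plausible reading is a learnable bank of prototype vectors with soft nearest-prototype assignment over an articulation embedding; the sketch below follows that assumption, and the class name `PathologyPrototypeCodebook` and the soft-assignment choice are hypothetical rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PathologyPrototypeCodebook(nn.Module):
    """Hypothetical codebook: learnable prototypes for pathology patterns.
    An articulation embedding is softly mixed from its nearest prototypes,
    giving an interpretable, controllable pathology code."""

    def __init__(self, n_prototypes=32, dim=256):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))

    def forward(self, articulation_emb):
        # Cosine similarity between each embedding and every prototype.
        sim = F.normalize(articulation_emb, dim=-1) \
              @ F.normalize(self.prototypes, dim=-1).T
        weights = sim.softmax(dim=-1)          # soft prototype assignment
        quantized = weights @ self.prototypes  # prototype-weighted code
        return quantized, weights


# Healthy-to-dysarthric conversion (and back) could then amount to swapping
# which prototype code the decoder is conditioned on.
codebook = PathologyPrototypeCodebook()
emb = torch.randn(8, 256)            # batch of articulation embeddings
code, assignments = codebook(emb)
```

A hard VQ-style snap with a straight-through gradient estimator is an equally plausible implementation; soft assignment is used here only to keep the sketch differentiable end to end.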