🤖 AI Summary
This study addresses the challenge of modeling interpretable voice quality dimensions to characterize speaker styles in atypical and affective speech. The authors define a set of seven voice and speech dimensions (intelligibility, imprecise consonants, harsh voice, naturalness, monoloudness, monopitch, and breathiness) and train linear probes on frozen Wav2Vec 2.0 embeddings using the Speech Accessibility Project (SAP) dataset. The probes combine interpretability with cross-domain generalizability: they perform strongly across elicitation categories within SAP and transfer zero-shot across languages (English and Italian) and tasks (atypical versus affective speech), with average accuracy exceeding 82%. These results support voice quality dimensions as an interpretable, transferable, and style-sensitive speech representation.
📝 Abstract
Perceptual voice quality dimensions describe key characteristics of atypical speech and other speech modulations. Here we develop and evaluate voice quality models for seven voice and speech dimensions (intelligibility, imprecise consonants, harsh voice, naturalness, monoloudness, monopitch, and breathiness). Probes were trained on the public Speech Accessibility Project (SAP) dataset of 11,184 samples from 434 speakers, using embeddings from frozen pre-trained models as features. We found that our probes had both strong performance and strong generalization across speech elicitation categories in the SAP dataset. We further validated zero-shot performance on additional datasets encompassing unseen languages and tasks: Italian atypical speech, English atypical speech, and affective speech. The strong zero-shot performance and the interpretability of results across an array of evaluations suggest the utility of voice quality dimensions in speaking-style-related tasks.
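As a concrete illustration of the probing setup described above, below is a minimal sketch of a linear probe trained on frozen wav2vec 2.0 embeddings. The checkpoint name, mean pooling over frames, and the logistic-regression probe head are illustrative assumptions; the abstract does not specify these choices, and the names `train_waveforms` and `train_labels` are hypothetical placeholders for per-utterance audio and perceptual ratings.

```python
# Sketch: linear probe on frozen wav2vec 2.0 embeddings (assumed setup, not the
# authors' exact pipeline). Checkpoint, pooling, and probe head are illustrative.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import LogisticRegression

CHECKPOINT = "facebook/wav2vec2-base"  # assumed checkpoint; expects 16 kHz audio

extractor = Wav2Vec2FeatureExtractor.from_pretrained(CHECKPOINT)
encoder = Wav2Vec2Model.from_pretrained(CHECKPOINT)
encoder.eval()  # frozen encoder: no gradient updates to the pre-trained model


def embed(waveform: np.ndarray, sample_rate: int = 16_000) -> np.ndarray:
    """Return a fixed-size utterance embedding by mean-pooling frame features."""
    inputs = extractor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # shape (1, frames, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()      # shape (dim,)


# Hypothetical usage: train_waveforms is a list of 1-D float arrays, train_labels
# holds binary perceptual ratings for one dimension (e.g., breathiness).
# X = np.stack([embed(w) for w in train_waveforms])
# probe = LogisticRegression(max_iter=1000).fit(X, train_labels)
# breathiness_scores = probe.predict_proba(X)[:, 1]
```

Because the encoder stays frozen, only the lightweight probe is fit per dimension, which keeps the models interpretable and makes zero-shot evaluation on unseen datasets straightforward.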