🤖 AI Summary
Self-supervised learning (SSL) models for speech-to-articulation inversion exhibit significant cross-speaker inconsistency in their articulatory targets, yet existing evaluation and training paradigms rely on ground-truth articulatory labels, which are unavailable in many realistic scenarios.
Method: We propose a label-free evaluation and training framework leveraging only speech data. First, we introduce a cross-speaker articulatory consistency metric based on minimal pairs. Second, we design articulation-consistency-oriented adaptation strategies for SSL models to enhance generalization across single- and multi-speaker settings in English and Russian.
Contribution/Results: Experiments confirm substantial cross-speaker articulatory deviation in current SSL models. Our method significantly improves articulatory target consistency (p < 0.01) in both languages. This work is the first to apply minimal pairs to articulatory consistency assessment, establishing a verifiable, language-general, and disentangled pathway for articulatory representation learning, with empirical validation across two typologically distinct languages.
📝 Abstract
Acoustic-to-Articulatory Inversion (AAI) attempts to model the inverse mapping from speech to articulation. Exact articulatory prediction from speech alone may be impossible, as speakers can choose different forms of articulation seemingly without reference to their vocal tract structure. However, once a speaker has selected an articulatory form, their productions vary minimally. Recent work in AAI has proposed adapting Self-Supervised Learning (SSL) models to single-speaker datasets, claiming that these single-speaker models provide a universal articulatory template. In this paper, we investigate whether SSL-adapted models trained on single- and multi-speaker data produce articulatory targets that are consistent across speaker identities for English and Russian. We do this using a novel evaluation method that extracts articulatory targets from minimal pair sets. We also present a training method that can improve inter-speaker consistency using only speech data.
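The minimal-pair evaluation described above could be sketched roughly as follows. The idea: for a word pair differing in one phoneme, each speaker's predicted articulatory targets define a contrast vector, and cross-speaker agreement between those vectors measures consistency. Everything here is an illustrative assumption, not the paper's actual formulation: the data layout (`targets[(word, speaker)]` as a per-word articulatory feature vector, e.g. averaged over the contrastive segment) and the choice of mean pairwise cosine similarity as the consistency score are both hypothetical.

```python
import numpy as np

def pair_contrast_vectors(targets, pair, speakers):
    """For one minimal pair (w1, w2), stack each speaker's articulatory
    contrast vector target(w1) - target(w2).
    `targets[(word, speaker)]` is a 1-D array of predicted articulatory
    features; this layout is an illustrative assumption."""
    w1, w2 = pair
    return np.stack([targets[(w1, s)] - targets[(w2, s)] for s in speakers])

def cross_speaker_consistency(contrasts):
    """Mean pairwise cosine similarity of contrast vectors across
    speakers: values near 1.0 mean speakers realize the phonemic
    contrast in the same articulatory direction."""
    normed = contrasts / np.linalg.norm(contrasts, axis=1, keepdims=True)
    sim = normed @ normed.T
    iu = np.triu_indices(len(contrasts), k=1)
    return float(sim[iu].mean())

# Toy example: 3 speakers, a hypothetical "bat"/"pat" pair, and
# synthetic 12-dim articulatory targets sharing one contrast direction.
rng = np.random.default_rng(0)
base = rng.normal(size=12)  # shared articulatory contrast direction
targets = {}
for s in ["spk1", "spk2", "spk3"]:
    anchor = rng.normal(size=12)  # speaker-specific baseline posture
    targets[("bat", s)] = anchor + base + 0.05 * rng.normal(size=12)
    targets[("pat", s)] = anchor
contrasts = pair_contrast_vectors(targets, ("bat", "pat"),
                                  ["spk1", "spk2", "spk3"])
score = cross_speaker_consistency(contrasts)
```

With consistent synthetic speakers the score sits near 1.0; an SSL model that assigns divergent targets per speaker would drive it toward 0, which is the kind of deviation the paper reports.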