Training Articulatory Inversion Models for Inter-Speaker Consistency

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Self-supervised learning (SSL) models for speech-to-articulation inversion exhibit significant cross-speaker articulatory target inconsistency, yet existing evaluation and training paradigms rely on ground-truth articulatory labels—unavailable in many realistic scenarios. Method: We propose a label-free evaluation and training framework leveraging only speech data. First, we introduce a cross-speaker articulatory consistency metric based on minimal pairs. Second, we design articulation-consistency-oriented adaptation strategies for SSL models to enhance generalization across single- and multi-speaker settings in English and Russian. Contribution/Results: Experiments confirm substantial cross-speaker articulatory deviation in current SSL models. Our method significantly improves articulatory target consistency (p < 0.01) in both languages. This work is the first to apply minimal pairs to articulatory consistency assessment, establishing a verifiable, language-general, and disentangled pathway for articulatory representation learning—with empirical validation across typologically distinct languages.
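The paper's metric measures whether different speakers' predicted articulatory targets agree on minimal pairs. As a rough illustration of the idea (not the authors' implementation — function names, the mean-over-time target extraction, and the pairwise cosine-similarity aggregation are all assumptions here), one could compare per-speaker articulatory shift vectors between the two words of a minimal pair:

```python
import numpy as np

def articulatory_shift(traj_a, traj_b):
    """Hypothetical articulatory-target extraction: the difference between
    the time-averaged predicted articulator trajectories for the two words
    of a minimal pair, produced by the same speaker."""
    return traj_a.mean(axis=0) - traj_b.mean(axis=0)

def cross_speaker_consistency(shifts):
    """Mean pairwise cosine similarity of per-speaker shift vectors.
    Values near 1 indicate speakers share a common articulatory target
    direction for the contrast; values near 0 indicate inconsistency."""
    sims = []
    for i in range(len(shifts)):
        for j in range(i + 1, len(shifts)):
            a, b = shifts[i], shifts[j]
            sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    return float(np.mean(sims))
```

Each `traj` would be a (time, articulator) array of inversion-model outputs; a consistency-oriented training objective could then reward high agreement across speakers using speech data alone.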

📝 Abstract
Acoustic-to-Articulatory Inversion (AAI) attempts to model the inverse mapping from speech to articulation. Exact articulatory prediction from speech alone may be impossible, as speakers can choose different forms of articulation seemingly without reference to their vocal tract structure. However, once a speaker has selected an articulatory form, their productions vary minimally. Recent works in AAI have proposed adapting Self-Supervised Learning (SSL) models to single-speaker datasets, claiming that these single-speaker models provide a universal articulatory template. In this paper, we investigate whether SSL-adapted models trained on single and multi-speaker data produce articulatory targets which are consistent across speaker identities for English and Russian. We do this through the use of a novel evaluation method which extracts articulatory targets using minimal pair sets. We also present a training method which can improve inter-speaker consistency using only speech data.
Problem

Research questions and friction points this paper is trying to address.

Investigates inter-speaker consistency in articulatory inversion models
Evaluates SSL-adapted models for English and Russian speech data
Proposes training method to enhance cross-speaker articulatory consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

SSL-adapted models for multi-speaker consistency
Novel evaluation using minimal pair sets
Training method improves inter-speaker consistency
Charles McGhee
ALTA Institute/MIL, Department of Engineering, University of Cambridge, UK
Mark Gales
Cambridge University
Speech Processing · Machine Learning
K. Knill
ALTA Institute/MIL, Department of Engineering, University of Cambridge, UK