MauBERT: Universal Phonetic Inductive Biases for Few-Shot Acoustic Units Discovery

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalizability and strong language dependency of multilingual speech representations, this work introduces articulatory features, grounded in speech physiology, into HuBERT's multilingual pretraining for the first time, constructing language-agnostic speech representations with built-in phonetic inductive biases. Methodologically, it continues multilingual HuBERT pretraining with joint supervision from articulatory features and phonemes, and evaluates performance with the ABX minimal-pair discriminability metric. Experiments across 55 languages show substantial improvements in context invariance: the model achieves lower ABX error rates than existing state-of-the-art multilingual self-supervised models. Moreover, only 10 hours of unsupervised fine-tuning suffice for efficient adaptation to unseen languages and informal speech. This work establishes a robust, transferable paradigm for low-resource speech modeling, advancing both representation learning and cross-lingual generalization in self-supervised speech processing.
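The ABX metric mentioned above can be sketched as follows. This is a minimal illustration assuming frame-averaged embeddings and cosine distance; the paper's actual evaluation pipeline (e.g. its alignment and pooling choices) may differ.

```python
import numpy as np

def cosine_distance(u, v):
    # Cosine distance between two embedding vectors.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_error(triples):
    """ABX error over (a, b, x) embedding triples, where x belongs to the
    same phonetic category as a but not b. A triple counts as an error
    when x is at least as close to b as to a; lower is better."""
    triples = list(triples)
    errors = sum(cosine_distance(x, a) >= cosine_distance(x, b)
                 for a, b, x in triples)
    return errors / len(triples)
```

A context-invariant representation keeps tokens of the same phone close across speakers and contexts, which drives this error rate down.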

📝 Abstract
This paper introduces MauBERT, a multilingual extension of HuBERT that leverages articulatory features for robust cross-lingual phonetic representation learning. We continue HuBERT pre-training with supervision based on a phonetic-to-articulatory feature mapping in 55 languages. Our models learn from multilingual data to predict articulatory features or phones, resulting in language-independent representations that capture multilingual phonetic properties. Through comprehensive ABX discriminability testing, we show MauBERT models produce more context-invariant representations than state-of-the-art multilingual self-supervised learning models. Additionally, the models effectively adapt to unseen languages and casual speech with minimal self-supervised fine-tuning (10 hours of speech). This establishes an effective approach for instilling linguistic inductive biases in self-supervised speech models.
Problem

Research questions and friction points this paper is trying to address.

Develop multilingual phonetic representation learning using articulatory features
Enhance cross-lingual acoustic unit discovery with few-shot adaptation
Improve context-invariant speech representations for unseen languages and casual speech
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual HuBERT extension with articulatory feature supervision
Predicts articulatory features or phones across 55 languages
Achieves context-invariant representations with minimal fine-tuning
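The articulatory-feature supervision above rests on a phone-to-articulatory-feature mapping. A minimal sketch of such a lookup is shown below; the feature inventory and phone set here are hypothetical examples, not the paper's actual mapping.

```python
# Illustrative articulatory feature inventory (hypothetical, for sketch only).
FEATURES = ["voiced", "bilabial", "alveolar", "nasal", "plosive", "fricative"]

# Toy phone-to-feature table; a real mapping would cover a full phone set.
PHONE_TO_FEATURES = {
    "p": {"bilabial", "plosive"},
    "b": {"voiced", "bilabial", "plosive"},
    "m": {"voiced", "bilabial", "nasal"},
    "s": {"alveolar", "fricative"},
    "z": {"voiced", "alveolar", "fricative"},
}

def articulatory_vector(phone):
    """Binary feature vector for a phone, usable as a language-agnostic
    training target shared across all 55 pretraining languages."""
    active = PHONE_TO_FEATURES[phone]
    return [1 if f in active else 0 for f in FEATURES]
```

Because articulatory features are defined by speech physiology rather than by any one language's phoneme inventory, targets like these transfer across languages, which is the inductive bias the model exploits.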
Angelo Ortiz Tandazo
ENS, PSL Research University, EHESS, CNRS, Paris, France
Manel Khentout
ENS, PSL Research University, EHESS, CNRS, Paris, France
Youssef Benchekroun
Meta AI Research, France
Thomas Hueber
Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
Emmanuel Dupoux
Professor of Cognitive Psychology, Ecole des Hautes Etudes en Sciences Sociales, Paris
Cognitive development · psycholinguistics · language acquisition · cognitive modeling · machine learning