🤖 AI Summary
To address the weak generalizability and strong language dependency of multilingual speech representations, this work introduces articulatory features, grounded in speech physiology, into HuBERT's multilingual pretraining for the first time, constructing language-agnostic speech representations with built-in phonetic inductive biases. Methodologically, the authors continue HuBERT pretraining on multilingual data with joint supervision from articulatory features and phonemes, evaluating performance with the ABX minimal-pair discriminability metric. Experiments across 55 languages show substantial gains in context invariance: the model achieves lower ABX error rates than state-of-the-art multilingual self-supervised models. Moreover, only 10 hours of self-supervised fine-tuning suffice for efficient adaptation to unseen languages and casual speech. This work establishes a robust, transferable paradigm for low-resource speech modeling, advancing both representation learning and cross-lingual generalization in self-supervised speech processing.
📝 Abstract
This paper introduces MauBERT, a multilingual extension of HuBERT that leverages articulatory features for robust cross-lingual phonetic representation learning. We continue HuBERT pre-training with supervision based on a phonetic-to-articulatory feature mapping in 55 languages. Our models learn from multilingual data to predict articulatory features or phones, resulting in language-independent representations that capture multilingual phonetic properties. Through comprehensive ABX discriminability testing, we show MauBERT models produce more context-invariant representations than state-of-the-art multilingual self-supervised learning models. Additionally, the models effectively adapt to unseen languages and casual speech with minimal self-supervised fine-tuning (10 hours of speech). This establishes an effective approach for instilling linguistic inductive biases in self-supervised speech models.
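Both the summary and the abstract evaluate representations with ABX minimal-pair discriminability: given tokens A and X from one phonetic category and B from a contrasting one, the test asks whether X lies closer to A than to B in representation space. Below is a minimal, hypothetical sketch of that idea over precomputed (e.g. mean-pooled) embedding vectors using cosine distance; the function name `abx_error_rate` and the pooling/distance choices are illustrative assumptions, not the paper's exact protocol, which typically operates on frame sequences with alignment-based distances.

```python
import numpy as np

def abx_error_rate(a_set, b_set, x_set):
    """Fraction of (A, B, X) triples where X (same category as A)
    is NOT strictly closer to A than to B.

    a_set, x_set: embedding vectors from one phonetic category;
    b_set: embedding vectors from a contrasting category.
    Illustrative sketch: real ABX pipelines usually compare frame
    sequences with DTW rather than single pooled vectors.
    """
    def cosine(u, v):
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    errors, total = 0, 0
    for a in a_set:
        for b in b_set:
            for x in x_set:
                if x is a:  # skip the degenerate triple where X is A itself
                    continue
                total += 1
                if cosine(a, x) >= cosine(b, x):
                    errors += 1
    return errors / total
```

Lower is better: well-separated, context-invariant phonetic categories drive the error rate toward zero, which is the sense in which MauBERT's representations are reported to outperform prior multilingual self-supervised models.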