🤖 AI Summary
To address representational confusion arising from language-phoneme feature coupling in multilingual speech recognition, this paper proposes a disentangled quantization-enhanced Data2vec framework. The core innovation is the first integration of task-oriented dual online K-means quantizers into the Data2vec architecture: one performs language clustering on shallow features, while the other conducts phoneme/word clustering on intermediate features, enabling explicit disentanglement of language and phoneme representations prior to masked prediction. This design overcomes the feature-coupling bottleneck inherent in conventional multi-layer averaging. Under the CommonVoice self-supervised setting, the method achieves relative reductions of 9.51% in phoneme error rate (PER) and 11.58% in word error rate (WER) over the baseline Data2vec. In the weakly supervised scenario, the relative PER reduction further reaches 18.09%, demonstrating both the effectiveness and generalizability of the disentangled modeling approach.
📝 Abstract
Data2vec is a self-supervised learning (SSL) approach that employs a teacher-student architecture for contextual representation learning via masked prediction, demonstrating remarkable performance in monolingual ASR. Previous studies have revealed that data2vec's shallow layers capture speaker and language information, its middle layers encode phoneme and word features, and its deep layers are responsible for reconstruction. Language and phoneme features are crucial for multilingual ASR. However, data2vec's masked representation generation relies on multi-layer averaging, which inevitably couples these features. To address this limitation, we propose a decoupling-quantization-based data2vec (DQ-Data2vec) for multilingual ASR, which comprises a data2vec backbone and two improved online K-means quantizers. Our core idea is to use K-means quantizers with specified cluster numbers to decouple language and phoneme information for masked prediction. Specifically, in the language quantization, considering that the number of languages differs significantly from the number of values taken by other, irrelevant attributes (e.g., speakers), we set the cluster number to match the number of languages, explicitly decoupling the shallow layers' language-related information from irrelevant features. The same strategy is applied to decouple the middle layers' phoneme and word features. In a self-supervised scenario, experiments on the CommonVoice dataset demonstrate that DQ-Data2vec achieves relative reductions of 9.51% in phoneme error rate (PER) and 11.58% in word error rate (WER) compared to data2vec and UniData2vec. Moreover, in a weakly-supervised scenario incorporating language labels and high-resource language text labels, the relative reductions are 18.09% and 1.55%, respectively.
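The central mechanism, an online K-means quantizer whose cluster count is fixed to the number of target categories (languages for shallow features, phonemes/words for middle features), can be sketched as follows. This is a minimal illustration under assumed design choices (EMA centroid updates, nearest-centroid assignment, and all class/variable names), not the paper's actual implementation:

```python
import numpy as np

class OnlineKMeansQuantizer:
    """Minimal online K-means quantizer (illustrative sketch).

    Assigns each feature vector to its nearest centroid and updates the
    selected centroids with an exponential moving average (EMA), so the
    codebook adapts as new mini-batches of features arrive.
    """

    def __init__(self, num_clusters, dim, decay=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(num_clusters, dim))
        self.decay = decay

    def __call__(self, features):
        # features: (batch, dim). Nearest-centroid assignment by L2 distance.
        dists = np.linalg.norm(
            features[:, None, :] - self.centroids[None, :, :], axis=-1
        )
        codes = dists.argmin(axis=1)
        # Online EMA update of only the centroids that were selected.
        for k in np.unique(codes):
            batch_mean = features[codes == k].mean(axis=0)
            self.centroids[k] = (
                self.decay * self.centroids[k] + (1 - self.decay) * batch_mean
            )
        # Quantized output: each feature is replaced by its centroid.
        return self.centroids[codes], codes

# Two quantizers with task-matched cluster counts, mirroring the paper's idea:
# one for languages on shallow-layer features, one for phonemes on middle-layer
# features. The counts and feature dimension here are hypothetical.
num_languages, num_phonemes, dim = 10, 128, 768
lang_quantizer = OnlineKMeansQuantizer(num_languages, dim)
phone_quantizer = OnlineKMeansQuantizer(num_phonemes, dim)
```

Fixing the cluster count to the number of languages is what forces the codebook to capture language identity rather than, say, speaker identity, whose category count is far larger and thus poorly matched to the codebook size.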