DQ-Data2vec: Decoupling Quantization for Multilingual Speech Recognition

📅 2025-01-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the representational confusion that arises when language and phoneme features are coupled in multilingual speech recognition, this paper proposes DQ-Data2vec, a decoupling-quantization-enhanced Data2vec framework. The core innovation is the integration of two task-oriented online K-means quantizers into the Data2vec architecture: one clusters shallow-layer features by language, while the other clusters middle-layer features by phoneme/word, enabling explicit disentanglement of language and phoneme representations prior to masked prediction. This design overcomes the semantic aliasing bottleneck inherent in conventional multi-layer averaging. In a self-supervised setting on CommonVoice, the method achieves relative reductions of 9.51% in phoneme error rate (PER) and 11.58% in word error rate (WER) over the Data2vec baseline. In a weakly supervised scenario, PER and WER improve by a further 18.09% and 1.55% relative, demonstrating both the effectiveness and the generalizability of the decoupled modeling approach.

📝 Abstract
Data2vec is a self-supervised learning (SSL) approach that employs a teacher-student architecture for contextual representation learning via masked prediction, demonstrating remarkable performance in monolingual ASR. Previous studies have revealed that data2vec's shallow layers capture speaker and language information, middle layers encode phoneme and word features, while deep layers are responsible for reconstruction. Language and phoneme features are crucial for multilingual ASR. However, data2vec's masked representation generation relies on multi-layer averaging, inevitably coupling these features. To address this limitation, we propose a decoupling quantization based data2vec (DQ-Data2vec) for multilingual ASR, which includes a data2vec backbone and two improved online K-means quantizers. Our core idea is using the K-means quantizer with specified cluster numbers to decouple language and phoneme information for masked prediction. Specifically, in the language quantization, considering that the number of languages is significantly different from other irrelevant features (e.g., speakers), we assign the cluster number to match the number of languages, explicitly decoupling shallow layers' language-related information from irrelevant features. This strategy is also applied to decoupling middle layers' phoneme and word features. In a self-supervised scenario, experiments on the CommonVoice dataset demonstrate that DQ-Data2vec achieves a relative reduction of 9.51% in phoneme error rate (PER) and 11.58% in word error rate (WER) compared to data2vec and UniData2vec. Moreover, in a weakly-supervised scenario incorporating language labels and high-resource language text labels, the relative reduction is 18.09% and 1.55%, respectively.
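The key idea in the abstract is an online K-means quantizer whose cluster count is fixed to a known quantity (e.g. the number of languages for shallow-layer features), so that clustering isolates the target factor from irrelevant variation. The sketch below is a minimal, hypothetical illustration of that clustering/assignment step only; it is not the paper's implementation, and the class, parameter names, and toy data are invented for illustration.

```python
import numpy as np

class OnlineKMeansQuantizer:
    """Toy online (mini-batch) K-means quantizer.

    Hypothetical sketch: in DQ-Data2vec the quantizers are trained jointly
    with the data2vec backbone; here we only show how fixing num_clusters
    to a known quantity (e.g. the number of languages) yields discrete
    codes for frame-level features.
    """

    def __init__(self, num_clusters: int, dim: int, lr: float = 0.05, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.standard_normal((num_clusters, dim))
        self.lr = lr

    def assign(self, feats: np.ndarray) -> np.ndarray:
        # feats: (T, dim) frame-level features -> nearest-centroid index per frame.
        dists = np.linalg.norm(feats[:, None, :] - self.centroids[None, :, :], axis=-1)
        return dists.argmin(axis=-1)

    def update(self, feats: np.ndarray) -> np.ndarray:
        # One online step: assign frames, then move each selected centroid
        # a small step toward the mean of its assigned frames.
        codes = self.assign(feats)
        for k in np.unique(codes):
            mean_k = feats[codes == k].mean(axis=0)
            self.centroids[k] += self.lr * (mean_k - self.centroids[k])
        return codes

# Toy usage: "shallow-layer" features drawn from 3 synthetic languages,
# quantized with num_clusters equal to the number of languages.
rng = np.random.default_rng(1)
centers = rng.standard_normal((3, 8)) * 5.0
feats = np.concatenate(
    [centers[i] + 0.1 * rng.standard_normal((50, 8)) for i in range(3)]
)

q = OnlineKMeansQuantizer(num_clusters=3, dim=8)
# Seed centroids from one sample per group (toy stand-in for a proper init).
q.centroids = feats[[0, 50, 100]].copy()
for _ in range(100):
    codes = q.update(feats)
```

After these updates, frames from the same synthetic "language" share one code and the three groups receive distinct codes, which is the disentangling effect the cluster-count constraint is meant to encourage.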
Problem

Research questions and friction points this paper is trying to address.

Multilingual Speech Recognition
Phonetic Identification
Machine Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

DQ-Data2vec
K-means algorithm
unsupervised learning
Qijie Shao
Northwestern Polytechnical University
Speech Recognition, Accent/Dialect Recognition
Linhao Dong
Bytedance Speech, Beijing Bytedance Technology Co Ltd, Beijing 100098, China
Kun Wei
School of Computer Science, Northwestern Polytechnical University
deep learning, computer science, speech
Sining Sun
Shenghui Weilai (声绘未来) Beijing Technology Co., Ltd.
Machine learning, robust speech recognition, adversarial learning, speech enhancement, beamforming
Lei Xie
Audio, Speech and Language Processing Group (ASLP), School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an 710072, China