🤖 AI Summary
This work addresses the lack of a clear theoretical foundation underlying the HuBERT objective, which has hindered its refinement and further improvement. By framing HuBERT within the Variational Predictive Coding (VPC) framework, this study provides the first unified theoretical interpretation of HuBERT, elucidating its intrinsic connections to other self-supervised speech representation methods such as APC, CPC, wav2vec, and BEST-RQ. Building on this theoretical insight, the authors propose two simple yet effective enhancement strategies that significantly boost model performance across four downstream tasks—phoneme classification, pitch tracking, speaker identification, and automatic speech recognition—thereby demonstrating the validity and generality of the proposed theoretical perspective.
📝 Abstract
Despite being among the best-known objectives for learning speech representations, the HuBERT objective has seen little further development or improvement. We argue that it is the lack of an underlying principle that stalls this development, and, in this paper, we show that predictive coding under a variational view is the principle behind the HuBERT objective. Due to its generality, our formulation provides opportunities to improve parameterization and optimization, and we show two simple modifications that bring immediate improvements to the HuBERT objective. In addition, the predictive coding formulation has tight connections to various other objectives, such as APC, CPC, wav2vec, and BEST-RQ. Empirically, the improvement in pre-training brings significant gains on four downstream tasks: phone classification, f0 tracking, speaker recognition, and automatic speech recognition, highlighting the importance of the predictive coding interpretation.