🤖 AI Summary
To address the limited robustness of speech foundation models for automatic speech recognition (ASR) in noisy environments, this paper introduces the VICReg statistical regularization mechanism into the HuBERT pretraining framework. By jointly constraining variance, invariance, and covariance in the representation space, the method explicitly shapes the statistics of noisy speech embeddings, improving generalization to unseen noise types. Crucially, it operates in a fully self-supervised manner, requiring neither noise labels nor paired clean-noisy data, and learns robust representations directly during pretraining. Evaluated on LibriSpeech, the proposed approach yields relative word error rate (WER) reductions of 23.3% on test-clean and 13.2% on test-other compared to a HuBERT baseline pre-trained on noisy speech, demonstrating substantially improved adaptability and stability across noise scenarios.
📝 Abstract
Noise robustness in speech foundation models (SFMs) has been a critical challenge, as most models are primarily trained on clean data and suffer performance degradation when exposed to noisy speech. To address this issue, we propose HuBERT-VIC, a noise-robust SFM with variance, invariance, and covariance regularization (VICReg) objectives. These objectives adjust the statistics of noisy speech representations, enabling the model to capture diverse acoustic characteristics and improving generalization across different types of noise. When applied to HuBERT, our model shows relative performance improvements of 23.3% on LibriSpeech test-clean and 13.2% on test-other, compared to the baseline model pre-trained on noisy speech.
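The abstract above does not give the exact loss, but the VICReg objective it builds on (Bardes et al., 2022) is well documented. Below is a minimal NumPy sketch of that generic objective applied to two batches of representations; the weights `sim_w`, `var_w`, `cov_w` and the margin `gamma` follow the original VICReg defaults and are assumptions here, as HuBERT-VIC may weight or attach the terms differently.

```python
import numpy as np

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, gamma=1.0, eps=1e-4):
    """Sketch of the generic VICReg objective on two views z1, z2 of shape (N, D).

    Invariance pulls paired embeddings together; variance keeps each
    dimension's std above gamma (preventing collapse); covariance
    decorrelates dimensions. Weights are the VICReg paper's defaults,
    not values from HuBERT-VIC.
    """
    # Invariance term: mean-squared error between the two views.
    sim = np.mean((z1 - z2) ** 2)

    def var_term(z):
        # Variance term: hinge loss on the per-dimension standard deviation.
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))

    def cov_term(z):
        # Covariance term: penalize squared off-diagonal covariance entries.
        n, d = z.shape
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d

    var = var_term(z1) + var_term(z2)
    cov = cov_term(z1) + cov_term(z2)
    return sim_w * sim + var_w * var + cov_w * cov
```

The collapse-prevention behavior is easy to see: embeddings that all equal the same vector have zero per-dimension std, so the variance hinge contributes roughly `var_w * gamma` per branch, whereas well-spread unit-variance embeddings incur almost no penalty.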