Exploring Prediction Targets in Masked Pre-Training for Speech Foundation Models

📅 2024-09-16
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work examines the design of prediction targets in masked pre-training for speech foundation models such as HuBERT, asking how target granularity, from fine-grained acoustic detail to higher-level abstraction, shapes downstream representation quality. The authors identify limitations of the prevailing clustering-based discrete-unit targets with respect to semantic abstraction and information completeness, and propose a multi-level, information-rich target construction framework that combines phonetic, prosodic, and continuous acoustic information within the HuBERT architecture. Cross-task transfer evaluations spanning speaker identification, automatic speech recognition (ASR), and speech enhancement show consistent improvements in representation quality across benchmarks. The study thereby links principled prediction-target design to downstream generalization in self-supervised speech pre-training.
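As a rough illustration of the kind of multi-level target described above, the sketch below builds three frame-aligned signals from one waveform: discrete k-means units (a phonetic-level target, in the spirit of HuBERT's clustering-based pseudo-labels), YIN-estimated pitch (a prosodic-level target), and the continuous MFCC frames themselves (an acoustic-level target). The feature choices, hyperparameters, and synthetic waveform are illustrative assumptions, not the paper's actual recipe.

```python
# Minimal sketch (assumed, not the paper's recipe) of multi-level
# prediction targets for masked pre-training.
import numpy as np
import librosa
from sklearn.cluster import KMeans

sr, hop = 16000, 160                       # 16 kHz audio, 10 ms frame shift
rng = np.random.default_rng(0)
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
y = (0.1 * np.sin(2 * np.pi * 220 * t)    # stand-in waveform + slight noise
     + 0.01 * rng.standard_normal(t.size)).astype(np.float32)

# Acoustic-level target: continuous frame features (here, 13 MFCCs).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop).T  # (T, 13)

# Phonetic-level target: discrete units from k-means over the features,
# analogous to HuBERT's clustering-based pseudo-labels.
km = KMeans(n_clusters=100, n_init=10, random_state=0).fit(mfcc)
units = km.predict(mfcc)                   # (T,) discrete unit IDs

# Prosodic-level target: frame-level fundamental frequency via YIN.
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr,
                 frame_length=1024, hop_length=hop)

# Align lengths and bundle one multi-level target per frame.
T = min(mfcc.shape[0], len(units), len(f0))
targets = {"units": units[:T], "f0": f0[:T], "acoustic": mfcc[:T]}
print({k: np.asarray(v).shape for k, v in targets.items()})
```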

📝 Abstract
Speech foundation models, such as HuBERT and its variants, are pre-trained on large amounts of unlabeled speech data and then used for a range of downstream tasks. These models use a masked prediction objective, where the model learns to predict information about masked input segments from the unmasked context. The choice of prediction targets in this framework impacts their performance on downstream tasks. For instance, models pre-trained with targets that capture prosody learn representations suited for speaker-related tasks, while those pre-trained with targets that capture phonetics learn representations suited for content-related tasks. Moreover, prediction targets can differ in the level of detail they capture. Models pre-trained with targets that encode fine-grained acoustic features perform better on tasks like denoising, while those pre-trained with targets focused on higher-level abstractions are more effective for content-related tasks. Despite the importance of prediction targets, the design choices that affect them have not been thoroughly studied. This work explores the design choices and their impact on downstream task performance. Our results indicate that the commonly used design choices for HuBERT can be suboptimal. We propose approaches to create more informative prediction targets and demonstrate their effectiveness through improvements across various downstream tasks.
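To make the masked prediction objective concrete, here is a minimal PyTorch sketch in the spirit of HuBERT: random spans of input frames are masked, a small context encoder processes the corrupted sequence, and cross-entropy against clustering-based pseudo-labels is computed only at masked positions. The toy encoder, masking scheme (zeroing instead of a learned mask embedding), and dimensions are simplified assumptions, not HuBERT's actual configuration.

```python
# Minimal sketch (assumed configuration) of HuBERT-style masked prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, T, D, n_units = 2, 200, 64, 100         # batch, frames, feat dim, unit vocab

features = torch.randn(B, T, D)            # stand-in for extracted frame features
units = torch.randint(0, n_units, (B, T))  # clustering-based pseudo-labels

# Span masking: zero out random 10-frame spans (~8% of frames start a span).
mask = torch.zeros(B, T, dtype=torch.bool)
starts = torch.rand(B, T) < 0.08
for b in range(B):
    for s in starts[b].nonzero(as_tuple=True)[0]:
        mask[b, s : s + 10] = True
masked_features = features.masked_fill(mask.unsqueeze(-1), 0.0)

# Toy context encoder + per-frame classifier over the unit vocabulary.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(D, n_units)
logits = head(encoder(masked_features))    # (B, T, n_units)

# The loss is computed only where the input was masked, so the model must
# infer the hidden frames' targets from the unmasked context.
loss = F.cross_entropy(logits[mask], units[mask])
print(float(loss))
```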
Problem

Research questions and friction points this paper is trying to address.

Speech Model Optimization
HuBERT
Masked Audio Prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improved Self-supervised Learning
Speech Model Optimization
Masked Prediction Enhancement