Efficient Training of Self-Supervised Speech Foundation Models on a Compute Budget

📅 2024-09-09
🏛️ Spoken Language Technology Workshop
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the efficient training of self-supervised speech foundation models under a constrained compute budget, systematically studying how model architecture, parameter count, and pretraining data scale jointly affect performance. Through controlled benchmarking and model scaling analysis, the authors find that slimmer architectures substantially outperform conventional small models under identical compute and parameter budgets, that pretraining data scale remains indispensable (repeatedly iterating over limited data degrades performance even with augmentation), and that there is a quantifiable trade-off between model size and data size. Key contributions include: (1) a principled, compute-aware guideline for selecting model size; (2) empirical evidence that data augmentation cannot substitute for genuine data expansion; and (3) downstream performance improvements achieved under fixed compute constraints. The findings provide actionable guidance for resource-efficient development of speech foundation models.

📝 Abstract
Despite their impressive success, training foundation models remains computationally costly. This paper investigates how to efficiently train speech foundation models with self-supervised learning (SSL) under a limited compute budget. We examine critical factors in SSL that impact the budget, including model architecture, model size, and data size. Our goal is to make analytical steps toward understanding the training dynamics of speech foundation models. We benchmark SSL objectives in an entirely comparable setting and find that other factors contribute more significantly to the success of SSL. Our results show that slimmer model architectures outperform common small architectures under the same compute and parameter budget. We demonstrate that the size of the pre-training data remains crucial, even with data augmentation during SSL training, as performance suffers when iterating over limited data. Finally, we identify a trade-off between model size and data size, highlighting an optimal model size for a given compute budget.
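The trade-off the abstract describes — for a fixed compute budget, a larger model leaves fewer passes over the data, and vice versa — can be made concrete with a back-of-the-envelope sketch. This is illustrative only (not the paper's method): it uses the common approximation that transformer training cost is roughly 6 × N × D FLOPs for N parameters and D training tokens/frames, and the budget and model sizes below are hypothetical.

```python
# Illustrative sketch (assumption, not from the paper): under a fixed compute
# budget C, the affordable data size D for a model with N parameters follows
# from the common approximation C ~= 6 * N * D for transformer training.

def tokens_for_budget(compute_flops: float, n_params: float) -> float:
    """Data size (tokens/frames) affordable for an n_params model,
    assuming training cost ~= 6 * N * D FLOPs."""
    return compute_flops / (6.0 * n_params)

budget = 1e19  # hypothetical compute budget in FLOPs
for n in (25e6, 95e6, 300e6):  # hypothetical model sizes (parameters)
    d = tokens_for_budget(budget, n)
    print(f"N={n:.0e} params -> D={d:.2e} tokens under fixed budget")
```

Scanning model sizes this way shows why an optimal model size exists for a given budget: too small a model underfits, while too large a model forces the training run to iterate over too little data.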
Problem

Research questions and friction points this paper is trying to address.

How to train speech foundation models efficiently under a limited compute budget.
How SSL factors (architecture, model size, data size) affect performance under that budget.
The trade-off between model size and data size.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning for speech models
Slimmer architectures optimize compute budget
Trade-off between model and data size