The Warmup Dilemma: How Learning Rate Strategies Impact Speech-to-Text Model Convergence

📅 2025-05-29
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Prior to this work, learning rate warmup strategies for large-scale speech-to-text (S2T) model training had not been systematically investigated. Method: We conduct a comparative analysis of linear, cosine, sub-exponential, and two-stage warmup schedules on state-of-the-art end-to-end architectures, including Conformer and Branchformer, trained on LibriSpeech. Contribution/Results: We identify, for the first time, that sub-exponential warmup is essential for stable and efficient large-scale S2T training. Empirical evaluation shows that elevated learning rates during warmup accelerate early convergence but do not improve the final word error rate (WER). Building on these insights, we propose an optimized sub-exponential warmup strategy that achieves faster convergence while preserving WER. Our findings establish a reproducible, high-performance learning rate scheduling principle for large-scale S2T training.

📝 Abstract
Training large-scale models presents challenges not only in terms of resource requirements but also in terms of convergence. For this reason, the learning rate (LR) is often decreased when the size of a model is increased. Such a simple solution is not enough for speech-to-text (S2T) training, where evolved and more complex variants of the Transformer architecture -- e.g., Conformer or Branchformer -- are used in light of their better performance. As a workaround, OWSM designed a double linear warmup of the LR, increasing it to a very small value in the first phase before updating it to a higher value in the second phase. While this solution worked well in practice, it was neither compared with alternative solutions, nor was the impact of different LR warmup schedules on final performance studied. This paper fills this gap, revealing that i) large-scale S2T training demands a sub-exponential LR warmup, and ii) a higher LR in the warmup phase accelerates initial convergence, but does not boost final performance.
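The schedules the abstract contrasts can be sketched as plain step-to-LR functions. The page does not give exact formulas, so the two-stage parameterization (modeled on the OWSM description above) and the sub-exponential growth exponent are illustrative assumptions, not the paper's actual settings:

```python
def linear_warmup(step: int, warmup_steps: int, peak_lr: float) -> float:
    """Standard linear warmup: LR grows linearly from 0 to peak_lr."""
    return peak_lr * min(1.0, step / warmup_steps)

def two_stage_linear_warmup(step: int, stage1_steps: int, stage2_steps: int,
                            stage1_lr: float, peak_lr: float) -> float:
    """Double linear warmup in the spirit of OWSM (parameter names are
    illustrative): first climb linearly to a very small stage1_lr, then
    climb linearly from stage1_lr to peak_lr."""
    if step < stage1_steps:
        return stage1_lr * step / stage1_steps
    step2 = min(step - stage1_steps, stage2_steps)
    return stage1_lr + (peak_lr - stage1_lr) * step2 / stage2_steps

def polynomial_warmup(step: int, warmup_steps: int, peak_lr: float,
                      power: float = 3.0) -> float:
    """One possible sub-exponential warmup: LR ~ (step/warmup_steps)**power,
    which grows slower than linear early on but slower than any exponential
    overall. The paper's exact schedule and exponent are not given on this
    page, so `power` is an assumption."""
    return peak_lr * min(1.0, step / warmup_steps) ** power
```

For example, halfway through a 1000-step linear warmup toward a peak LR of 1e-3, `linear_warmup(500, 1000, 1e-3)` yields 5e-4, while `polynomial_warmup(500, 1000, 1e-3, 2.0)` yields only 2.5e-4, illustrating how a super-linear polynomial keeps the LR very low in the earliest, least stable updates.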
Problem

Research questions and friction points this paper is trying to address.

Investigates learning rate warmup impact on speech-to-text model convergence
Compares double linear warmup with alternative learning rate schedules
Determines optimal warmup strategy for large-scale speech-to-text training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Double linear warmup for learning rate
Sub-exponential warmup for large-scale S2T
Higher initial LR accelerates early convergence
Marco Gaido
Fondazione Bruno Kessler
artificial intelligence, NLP, speech translation
Sara Papi
Researcher at FBK
Speech Processing, Speech Translation, Multimodal LLM
L. Bentivogli
Fondazione Bruno Kessler, Italy
A. Brutti
Fondazione Bruno Kessler, Italy
Mauro Cettolo
Researcher at Fondazione Bruno Kessler, Trento (Italy)
Natural Language Processing, Statistical Machine Translation, Automatic Speech Recognition
Roberto Gretter
Fondazione Bruno Kessler, Italy
M. Matassoni
Fondazione Bruno Kessler, Italy
Mohamed Nabih
Fondazione Bruno Kessler, Italy
Matteo Negri
Fondazione Bruno Kessler, Italy