🤖 AI Summary
Prior to this work, learning rate warmup strategies for large-scale speech-to-text (S2T) model training had not been systematically investigated. Method: We conduct a comparative analysis of linear, cosine, sub-exponential, and two-stage warmup schedules on state-of-the-art end-to-end architectures, including Conformer and Branchformer, using LibriSpeech. Contribution/Results: We show, for the first time, that sub-exponential warmup is essential for stable and efficient large-scale S2T training. Our experiments further show that elevated learning rates during warmup accelerate early convergence but do not improve the final word error rate (WER). Building on these insights, we propose an optimized sub-exponential warmup strategy that converges faster while preserving WER. Our findings establish a reproducible, high-performance learning rate scheduling principle for large-scale S2T training.
📝 Abstract
Training large-scale models presents challenges not only in terms of resource requirements but also in terms of convergence. For this reason, the learning rate (LR) is often decreased as model size increases. Such a simple solution is not enough in the case of speech-to-text (S2T) training, where more complex variants of the Transformer architecture -- e.g., Conformer or Branchformer -- are used in light of their better performance. As a workaround, OWSM designed a double linear warmup of the LR, increasing it to a very small value in the first phase before raising it to a higher value in the second phase. While this solution worked well in practice, it was neither compared with alternative solutions, nor was the impact of different LR warmup schedules on final performance studied. This paper fills this gap, revealing that i) large-scale S2T training demands a sub-exponential LR warmup, and ii) a higher LR during the warmup phase accelerates initial convergence, but does not boost final performance.
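To make the schedules discussed above concrete, here is a minimal sketch of the three warmup shapes: a standard linear ramp, an OWSM-style double linear (two-stage) warmup, and a sub-exponential ramp. The exact functional forms, step counts, and LR values below are illustrative assumptions, not the paper's actual hyperparameters; the sub-exponential ramp is sketched as a polynomial in the step count.

```python
def linear_warmup(step, warmup_steps, peak_lr):
    # Standard linear ramp: LR grows proportionally to the step count,
    # capped at peak_lr once warmup ends.
    return peak_lr * min(1.0, step / warmup_steps)

def two_stage_warmup(step, stage1_steps, stage2_steps, low_lr, peak_lr):
    # OWSM-style double linear warmup (illustrative formula): first ramp
    # linearly to a very small low_lr, then linearly up to peak_lr.
    if step < stage1_steps:
        return low_lr * step / stage1_steps
    if step < stage1_steps + stage2_steps:
        frac = (step - stage1_steps) / stage2_steps
        return low_lr + (peak_lr - low_lr) * frac
    return peak_lr

def sub_exponential_warmup(step, warmup_steps, peak_lr, power=3.0):
    # Sub-exponential ramp sketched as a polynomial (assumed form): it
    # stays much flatter than a linear ramp early on, then rises quickly
    # toward the end of warmup.
    return peak_lr * min(1.0, (step / warmup_steps) ** power)
```

Midway through warmup, the sub-exponential ramp yields a much smaller LR than the linear one (e.g., 0.5**3 = 0.125 of peak vs. 0.5 of peak), which matches the intuition that large S2T models need a gentler start.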