🤖 AI Summary
This work addresses the high cost of hyperparameter tuning during model upscaling and the lack of theoretical guarantees in existing small-to-large model extrapolation methods. The authors propose a general upscaling framework grounded in μP (maximal update parametrization) theory and architectural equivalence across arbitrary widths, establishing a rigorous equivalence between a narrow base model and its widened counterpart. This equivalence enables efficient knowledge transfer and accelerated training. By extending μTransfer to support upscaling scenarios, the method provides, for the first time, a theoretically sound foundation for hyperparameter transfer from small to large models. Extensive experiments across multiple real-world datasets and mainstream architectures demonstrate the approach’s effectiveness, significantly reducing training costs and improving convergence speed for large models.
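The core of μTransfer-style hyperparameter transfer is that, under μP, certain optimal hyperparameters follow known scaling rules in width, so values tuned on a narrow proxy model can be rescaled for the wide target instead of re-tuned. As a minimal illustration (not the paper's exact rule set), under μP the Adam learning rate for hidden, matrix-like layers scales as 1/width:

```python
# Hypothetical sketch of muP-style learning-rate transfer across widths.
# Under muP, the optimal Adam learning rate for hidden (matrix-like) layers
# scales roughly as 1/width, so a value tuned at a small base width can be
# reused at a larger target width by rescaling rather than re-tuning.

def transfer_lr(base_lr: float, base_width: int, target_width: int) -> float:
    """Rescale a hidden-layer learning rate tuned at base_width to target_width."""
    return base_lr * base_width / target_width

# A learning rate tuned on a width-256 proxy model...
tuned_lr = 3e-3
# ...transfers to a width-4096 target by shrinking 16x:
print(transfer_lr(tuned_lr, 256, 4096))  # 1.875e-04
```

The actual method extends this idea to the upscaling setting, where the wide model is additionally initialized from the trained narrow one rather than from scratch.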
📝 Abstract
Modern large-scale neural networks are often trained and released in multiple sizes to accommodate diverse inference budgets. To improve efficiency, recent work has explored model upscaling: initializing larger models from trained smaller ones in order to transfer knowledge and accelerate convergence. However, upscaling can be sensitive to hyperparameters that need to be tuned at the target upscaled model size, which is prohibitively costly to do directly. It remains unclear whether the most common workaround -- tuning on smaller models and extrapolating via hyperparameter scaling laws -- is still sound when using upscaling. We address this with principled approaches to upscaling with respect to model width and to efficient hyperparameter tuning in this setting. First, motivated by $\mu$P and any-dimensional architectures, we introduce a general upscaling method applicable to a broad range of architectures and optimizers, backed by theory guaranteeing that models are equivalent to their widened versions and allowing for rigorous analysis of infinite-width limits. Second, we extend the theory of $\mu$Transfer to a hyperparameter transfer technique for models upscaled using our method and empirically demonstrate that this method is effective on realistic datasets and architectures.
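The "equivalence between a model and its widened version" means the wide model computes the same function as the narrow one at initialization, so training picks up where the small model left off. A classic way to achieve this (a Net2Net-style sketch under our own assumptions, not necessarily the paper's construction) is to duplicate hidden units and halve their outgoing weights:

```python
import numpy as np

# Minimal sketch of a function-preserving width expansion for a 2-layer MLP
# y = W2 @ relu(W1 @ x): each hidden unit is duplicated, and the outgoing
# weights of each copy are halved, so the widened network computes exactly
# the same function as the narrow one.

def widen(W1, W2):
    W1_big = np.concatenate([W1, W1], axis=0)        # duplicate hidden units
    W2_big = np.concatenate([W2, W2], axis=1) / 2.0  # halve outgoing weights
    return W1_big, W2_big

def mlp(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # hidden width 8, input dim 4
W2 = rng.normal(size=(3, 8))   # output dim 3
x = rng.normal(size=4)

W1_big, W2_big = widen(W1, W2)  # hidden width doubles: 8 -> 16
print(np.allclose(mlp(W1, W2, x), mlp(W1_big, W2_big, x)))  # True
```

The paper's contribution is a general version of this idea, covering arbitrary target widths, architectures, and optimizer states, with theory connecting the construction to infinite-width limits.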