🤖 AI Summary
This work addresses the challenge of learning rate transferability in deep non-recurrent, multi-path neural architectures—such as CNNs, ResNets, and Transformers—where scaling depth incurs prohibitive hyperparameter tuning costs. The authors introduce the concept of “effective depth,” defined via graph-theoretic shortest paths, to uniformly characterize depth across diverse multi-path structures. By integrating maximal update parametrization (μP) with stability-based initialization theory, they uncover a universal power law: optimal learning rates decay with effective depth with exponent −3/2. This principle enables zero-shot learning rate transfer across varying depths and widths. Empirical validation across multiple mainstream architectures demonstrates high accuracy and strong generalization, substantially reducing the overhead of hyperparameter optimization.
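The power law above implies a simple transfer rule: if the optimal learning rate is known at one effective depth, the rate at another depth follows by rescaling with the ratio of depths raised to −3/2. A minimal sketch of that rule (the function name and the example numbers are illustrative, not from the paper):

```python
def transfer_lr(lr_ref: float, depth_ref: int, depth_new: int) -> float:
    """Zero-shot learning rate transfer under the -3/2 power law:
    lr(D) ∝ D^(-3/2), so lr_new = lr_ref * (D_new / D_ref)^(-3/2)."""
    return lr_ref * (depth_new / depth_ref) ** (-1.5)

# Illustrative use: a rate tuned at effective depth 16, transferred to depth 64.
# (64/16)^(-3/2) = 4^(-3/2) = 1/8, so the rate shrinks by a factor of 8.
lr_deep = transfer_lr(1e-3, depth_ref=16, depth_new=64)
print(lr_deep)  # 1.25e-04
```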
📝 Abstract
Deeper modern architectures are costly to train, making hyperparameter transfer preferable to expensive repeated tuning. Maximal Update Parametrization ($\mu$P) helps explain why many hyperparameters transfer across width. Yet depth scaling is less understood for modern architectures, whose computation graphs contain multiple parallel paths and residual aggregation. To unify various non-recurrent multi-path neural networks such as CNNs, ResNets, and Transformers, we introduce a graph-based notion of effective depth. Under stabilizing initializations and a maximal-update criterion, we show that the optimal learning rate decays with effective depth following a universal $-3/2$ power law. Here, the maximal-update criterion maximizes the typical one-step representation change at initialization without causing instability, and effective depth is the minimal path length from input to output, counting layers and residual additions. Experiments across diverse architectures confirm the predicted slope and enable reliable zero-shot transfer of learning rates across depths and widths, turning depth scaling into a predictable hyperparameter-transfer problem.