Hyperparameter Transfer Laws for Non-Recurrent Multi-Path Neural Networks

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of learning rate transferability in deep non-recurrent, multi-path neural architectures—such as CNNs, ResNets, and Transformers—where scaling depth incurs prohibitive hyperparameter tuning costs. The authors introduce the concept of “effective depth,” defined via graph-theoretic shortest paths to uniformly characterize depth across diverse multi-path structures. By integrating maximal update parametrization (μP) with stability-based initialization theory, they uncover a universal power-law relationship: optimal learning rates decay with effective depth following a −3/2 exponent. This principle enables zero-shot learning rate transfer across varying depths and widths. Empirical validation across multiple mainstream architectures demonstrates high accuracy and strong generalization, substantially reducing the overhead of hyperparameter optimization.

📝 Abstract
Deeper modern architectures are costly to train, making hyperparameter transfer preferable to expensive repeated tuning. Maximal Update Parametrization ($\mu$P) helps explain why many hyperparameters transfer across width. Yet depth scaling is less understood for modern architectures, whose computation graphs contain multiple parallel paths and residual aggregation. To unify various non-recurrent multi-path neural networks such as CNNs, ResNets, and Transformers, we introduce a graph-based notion of effective depth. Under stabilizing initializations and a maximal-update criterion, we show that the optimal learning rate decays with effective depth following a universal -3/2 power law. Here, the maximal-update criterion maximizes the typical one-step representation change at initialization without causing instability, and effective depth is the minimal path length from input to output, counting layers and residual additions. Experiments across diverse architectures confirm the predicted slope and enable reliable zero-shot transfer of learning rates across depths and widths, turning depth scaling into a predictable hyperparameter-transfer problem.
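The abstract's central result, that the optimal learning rate decays with effective depth as a $-3/2$ power law, can be sketched as a simple transfer rule. This is an illustrative sketch, not the authors' code: the reference depth and learning rate below are hypothetical values, and "effective depth" is the minimal input-to-output path length (counting layers and residual additions) as defined in the abstract.

```python
def transfer_learning_rate(lr_ref: float, depth_ref: int, depth_new: int) -> float:
    """Transfer a tuned learning rate across effective depths using the
    lr ~ depth^(-3/2) power law described in the abstract."""
    return lr_ref * (depth_new / depth_ref) ** (-1.5)

# Hypothetical example: a rate tuned at effective depth 16, transferred
# zero-shot to effective depth 64. The depth ratio is 4, and 4**(-1.5) = 1/8.
lr_64 = transfer_learning_rate(1e-3, 16, 64)
print(lr_64)  # 1.25e-04
```

Under this rule, quadrupling effective depth cuts the optimal learning rate by a factor of eight, which is what makes zero-shot transfer across depths possible once a single reference configuration has been tuned.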
Problem

Research questions and friction points this paper is trying to address.

hyperparameter transfer
effective depth
learning rate scaling
multi-path neural networks
depth scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

effective depth
hyperparameter transfer
maximal update parametrization
learning rate scaling
multi-path neural networks
Shenxi Wu
Fudan University, Shanghai, China
Haosong Zhang
Fudan University, Shanghai, China
Xingjian Ma
Fudan University, Shanghai, China
Shirui Bian
Fudan University, Shanghai, China
Yichi Zhang
New York University, New York, NY, USA
Xi Chen
Professor at New York University
Business Analytics, Statistics, Operations Management
Wei Lin
Professor of Applied Mathematics, Fudan University
Nonlinear dynamical systems, Complex networks, Computational systems biology, Stochastic and random systems, Artificial Intelligence