🤖 AI Summary
Training deep neural networks is often hindered by highly non-convex and ill-conditioned optimization landscapes, leading to unstable convergence. To address this challenge, this work proposes a multilevel deep learning framework that incrementally introduces residual blocks, where each level fits only the residual error left by the previously frozen network, thereby enabling a hierarchical and interpretable approximation process. Leveraging operator-theoretic analysis, the study provides the first rigorous theoretical guarantee for such a stagewise training strategy, proving that a fixed-width ReLU multilevel architecture yields strictly decreasing residuals that converge uniformly to zero; as a consequence, the framework can uniformly approximate any continuous function. Numerical experiments corroborate the predicted residual decay across levels and demonstrate markedly improved training stability.
📝 Abstract
We study multigrade deep learning (MGDL) as a principled framework for structured error refinement in deep neural networks. While the approximation power of neural networks is now relatively well understood, training very deep architectures remains challenging due to highly non-convex and often ill-conditioned optimization landscapes. In contrast, for relatively shallow networks, most notably one-hidden-layer $\texttt{ReLU}$ models, training admits convex reformulations with global guarantees, motivating learning paradigms that improve stability while scaling to depth. MGDL builds upon this insight by training deep networks grade by grade: previously learned grades are frozen, and each new residual block is trained solely to reduce the remaining approximation error, yielding an interpretable and stable hierarchical refinement process. We develop an operator-theoretic foundation for MGDL and prove that, for any continuous target function, there exists a fixed-width multigrade $\texttt{ReLU}$ scheme whose residuals decrease strictly across grades and converge uniformly to zero. To the best of our knowledge, this work provides the first rigorous theoretical guarantee that grade-wise training yields provable vanishing approximation error in deep networks. Numerical experiments further illustrate the theoretical results.
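The grade-by-grade refinement described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual MGDL implementation: each grade here is a random-feature ReLU block fit by least squares to the current residual, and the helper `fit_grade`, the width, and the target function are all hypothetical choices made for the demo. What it shows is the core loop: previously learned grades are frozen, each new block is trained solely on the remaining error, and the residual norm decreases across grades.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_grade(x, residual, width=32):
    """Fit one ReLU block to the current residual.
    Hypothetical stand-in: random ReLU features + least squares,
    not the gradient-based training used in the paper."""
    W = rng.normal(size=(width,))
    b = rng.normal(size=(width,))
    H = np.maximum(0.0, np.outer(x, W) + b)          # (n, width) ReLU features
    coef, *_ = np.linalg.lstsq(H, residual, rcond=None)
    # Returned block is "frozen": its parameters are never updated again.
    return lambda z: np.maximum(0.0, np.outer(z, W) + b) @ coef

# Illustrative continuous target on [0, 1]
x = np.linspace(0.0, 1.0, 200)
f = np.sin(4 * np.pi * x) + 0.5 * x

approx = np.zeros_like(x)        # running sum of frozen grades
residual_norms = []
for grade in range(4):
    residual = f - approx        # error left by the frozen grades
    block = fit_grade(x, residual)   # train ONLY on the residual
    approx = approx + block(x)       # freeze the new grade
    residual_norms.append(np.linalg.norm(f - approx))

print(residual_norms)
```

Because each grade is fit to the residual of the frozen predecessor, the residual norm is non-increasing by construction (a zero block would leave it unchanged), mirroring the strictly decreasing residuals the paper proves for its fixed-width multigrade ReLU scheme.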