Embedding Principle in Depth for the Loss Landscape Analysis of Deep Neural Networks

📅 2022-05-26
🏛️ CSIAM Transactions on Applied Mathematics
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the relationship between the loss landscapes of deep neural networks (DNNs) and shallower networks. The authors propose and formalize the *Depth Embedding Principle*: the loss landscape of a deeper network contains all critical points of any shallower network. To this end, they introduce the *critical lifting operator*, which lifts critical points while preserving network output, and establish a geometric mapping across parameter spaces of differing depths. Leveraging critical point theory, differential geometry, and loss landscape modeling, they show that: (1) local minima of shallow networks can become strict saddle points in deeper networks; (2) batch normalization helps avoid lifted critical manifolds by suppressing layer-wise linearization; and (3) increasing dataset size shrinks the lifted critical manifolds, accelerating convergence. The framework unifies explanations for the depth advantage, BN's acceleration mechanism, and data-scale effects, offering a novel geometric perspective on deep learning theory.
📝 Abstract
Understanding the relation between deep and shallow neural networks is extremely important for the theoretical study of deep learning. In this work, we discover an embedding principle in depth: the loss landscape of an NN "contains" all critical points of the loss landscapes of shallower NNs. The key tool for our discovery is the critical lifting operator proposed in this work, which maps any critical point of a network to critical manifolds of any deeper network while preserving the outputs. This principle provides new insights into many widely observed behaviors of DNNs. Regarding the easy training of deep networks, we show that a local minimum of an NN can be lifted to strict saddle points of a deeper NN. Regarding the acceleration effect of batch normalization, we demonstrate that batch normalization helps avoid the critical manifolds lifted from shallower NNs by suppressing layer linearization. We also prove that increasing training data shrinks the lifted critical manifolds, which can result in acceleration of training, as demonstrated in experiments. Overall, our discovery of the embedding principle in depth uncovers the depth-wise hierarchical structure of the deep learning loss landscape, which serves as a solid foundation for further study of the role of depth in DNNs.
Problem

Research questions and friction points this paper is trying to address.

Analyzing the relation between the loss landscapes of deep and shallow neural networks
Exploring an embedding principle that maps critical points into deeper networks
Investigating the role of depth in training dynamics and optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Critical lifting operator maps shallow to deep critical points
Batch normalization avoids lifted critical manifolds by suppressing linearization
More training data shrinks lifted critical manifolds, accelerating training
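The output-preserving embedding behind these points can be illustrated with a toy construction. The sketch below (plain numpy; an illustrative special case, not the paper's full critical lifting operator, which maps critical points to entire critical manifolds of the deeper network) embeds a 2-layer ReLU network into a 3-layer one by inserting an identity weight matrix: since the hidden activations are already non-negative, the extra ReLU layer acts as the identity and the deeper network computes exactly the same function.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# A shallow 2-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
d_in, d_hidden, d_out = 3, 5, 2
W1 = rng.standard_normal((d_hidden, d_in))
W2 = rng.standard_normal((d_out, d_hidden))

def f_shallow(x):
    return W2 @ relu(W1 @ x)

# One simple lifting: insert an identity layer between W1 and W2.
# Because h = relu(W1 @ x) is non-negative, relu(I @ h) = h,
# so the deeper 3-layer network computes the same function.
I = np.eye(d_hidden)

def f_deep(x):
    return W2 @ relu(I @ relu(W1 @ x))

x = rng.standard_normal(d_in)
assert np.allclose(f_shallow(x), f_deep(x))
```

If the shallow parameters (W1, W2) sit at a critical point of the shallow loss, such output-preserving embeddings place the lifted parameters on a critical set of the deeper loss, and the paper's analysis concerns the local geometry (e.g., strict saddles) around these lifted points.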
Zhiwei Bai
Shanghai Jiao Tong University
Machine Learning;Deep Learning
Tao Luo
School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai 200240, P.R. China; CMA-Shanghai, Shanghai Artificial Intelligence Laboratory, Shanghai 200240, P.R. China
Z. Xu
School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai 200240, P.R. China.
Yaoyu Zhang
Shanghai Jiao Tong University
Deep Learning Theory