🤖 AI Summary
This paper identifies a “percolation effect” in dropout training of deep neural networks: when the connection removal rate exceeds a critical threshold, input-output pathways disintegrate, leading to loss of representational capacity and training collapse.
Method: Leveraging a rigorous correspondence between dropout and percolation theory from statistical physics, we derive the critical condition for training failure, originating from a percolation phase transition, in bias-free networks, and argue heuristically that it extends to networks with biases. Our approach integrates percolation modeling, topological network analysis, analytical derivation of phase-transition thresholds, and empirical validation via SGD dynamics.
Results: We quantitatively characterize the critical dependence of connectivity on network depth, width, and dropout rate. This yields an interpretable theoretical mechanism for training failure under specific architectural constraints, bridging a key gap in regularization theory by incorporating critical phenomena—a previously missing perspective in the mechanistic study of dropout.
📝 Abstract
In this work, we investigate the existence and effect of percolation in training deep Neural Networks (NNs) with dropout. Dropout methods are regularisation techniques for training NNs, first introduced by G. Hinton et al. (2012). These methods temporarily remove connections from the NN at random at each stage of training and update the remaining subnetwork with Stochastic Gradient Descent (SGD). Removing connections from a network at random is closely analogous to percolation, a paradigm model of statistical physics.
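As an illustration, the connection removal described above can be sketched as a random binary mask applied to each weight matrix before the forward pass. This is a minimal NumPy sketch, not the paper's setup: the layer widths, ReLU activation, and `drop_rate` parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, weights, drop_rate):
    """Forward pass through a bias-free ReLU network in which each
    connection (weight entry) is independently removed with
    probability drop_rate, as in connection-level dropout."""
    h = x
    for W in weights:
        mask = rng.random(W.shape) >= drop_rate  # True = connection kept
        h = np.maximum(0.0, (W * mask) @ h)      # ReLU on the masked subnetwork
    return h

# Toy 3-layer, width-4 network with random weights (illustrative only).
weights = [rng.standard_normal((4, 4)) for _ in range(3)]
x = rng.standard_normal(4)
y = dropout_forward(x, weights, drop_rate=0.5)
```

In actual dropout training, a fresh mask would be sampled at every SGD step and only the surviving connections would receive gradient updates.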
If dropout removed enough connections that no path remained between the input and output of the NN, the NN could not make predictions informed by the data. We study new percolation models that mimic dropout in NNs and characterise the relationship between network topology and this path problem. The theory shows the existence of a percolative effect in dropout. We also show that this percolative effect can cause a breakdown when training bias-free NNs with dropout, and we argue heuristically that this breakdown extends to NNs with biases.
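The path problem above can be simulated directly: remove each edge of a fully connected layered graph independently and check whether any input-to-output path survives. The following is a hypothetical Monte Carlo sketch; the depths, widths, and rates are illustrative and are not the paper's derived thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)

def path_survives(depth, width, drop_rate):
    """One percolation sample on a fully connected layered graph:
    does any input-to-output path survive after each edge is
    removed independently with probability drop_rate?"""
    reachable = np.ones(width, dtype=bool)  # every input node is reachable
    for _ in range(depth):
        kept = rng.random((width, width)) >= drop_rate  # kept[i, j]: edge i -> j
        # A next-layer node is reachable if some kept edge leaves a reachable node.
        reachable = (kept & reachable[:, None]).any(axis=0)
        if not reachable.any():
            return False  # the network has disconnected
    return True

def survival_prob(depth, width, drop_rate, trials=2000):
    """Monte Carlo estimate of the probability a path survives."""
    return sum(path_survives(depth, width, drop_rate) for _ in range(trials)) / trials

# Deeper networks lose input-output connectivity at lower dropout rates.
print(survival_prob(depth=4, width=4, drop_rate=0.6))
print(survival_prob(depth=32, width=4, drop_rate=0.6))
```

Sweeping `drop_rate` for fixed depth and width would trace out the kind of connectivity transition the paper analyses theoretically.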