🤖 AI Summary
This work investigates "neuronal condensation," a phenomenon in deep neural network training in which neurons within the same layer spontaneously cluster into groups with similar outputs, with the number of clusters increasing monotonically as training progresses. We systematically characterize its dynamics and origins via nonlinear dynamical analysis, geometric characterization of the loss landscape, and controlled experiments across diverse weight initializations and regularization schemes, and find that small-weight initialization and Dropout significantly accelerate condensation. Crucially, we establish empirical positive correlations between the degree of condensation and both generalization performance on standard benchmarks and the inference accuracy of Transformer-based language models. These results position condensation as a quantifiable indicator of intrinsic structural organization in deep networks, and our study introduces a novel analytical paradigm and a measurable bridge for probing the internal mechanisms of deep learning.
📝 Abstract
In this paper, we provide an overview of condensation, a common phenomenon observed during the nonlinear training of neural networks: neurons in the same layer tend to condense into groups with similar outputs. Empirical observations suggest that the number of condensed clusters in a layer typically increases monotonically as training progresses, and that small weight initializations or Dropout can facilitate this condensation process. We also examine the underlying mechanisms of condensation from the perspectives of training dynamics and the structure of the loss landscape. The condensation phenomenon offers valuable insights into the generalization abilities of neural networks and correlates with stronger reasoning abilities in Transformer-based language models.
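To make the notion of "condensed clusters" concrete, one simple way to count them is to group neurons of a layer whose input-weight vectors point in nearly the same direction (high absolute cosine similarity). The sketch below is our own illustration, not the paper's metric; the function name, the similarity threshold, and the union-find grouping are assumptions made for demonstration.

```python
import numpy as np

def count_condensed_clusters(W, threshold=0.95):
    """Count groups of neurons with nearly parallel input-weight directions.

    W: (n_neurons, n_inputs) weight matrix of one layer.
    Neurons whose pairwise |cosine similarity| exceeds `threshold`
    are merged into the same cluster via a simple union-find.
    """
    n = W.shape[0]
    # Normalize each neuron's weight vector to unit length.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    U = W / np.clip(norms, 1e-12, None)
    # |cosine similarity|; sign-flipped neurons count as one direction.
    sim = np.abs(U @ U.T)

    # Union-find over the thresholded similarity graph.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > threshold:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

# Example: two nearly parallel pairs of neurons -> 2 clusters.
W = np.array([[1.0, 0.0],
              [0.99, 0.01],
              [0.0, 1.0],
              [0.0, -1.0]])
print(count_condensed_clusters(W))  # → 2
```

Tracking this count over training epochs would exhibit the monotone increase the abstract describes, under the stated similarity-threshold assumption.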