AI Summary
This work addresses the mechanistic underpinnings of SGD training dynamics and emergent phenomena in large-scale deep learning models. We propose the first entropic-force theoretical framework grounded in parameter symmetry and entropy landscape geometry. Using stochastic differential equations, information geometry, and symmetry analysis, we rigorously characterize how SGD's discrete-time updates and intrinsic noise jointly generate an *entropic force* that drives representation learning and spontaneously breaks continuous symmetries. The theory formally proves the Platonic Representation Hypothesis and resolves the sharpness-flatness optimization paradox. Empirically, we validate a thermodynamically inspired equipartition of gradient energy across dimensions and establish a universal thermodynamic mechanism for representation alignment. Our framework provides a computationally tractable and empirically verifiable thermodynamic foundation for understanding emergence in large models.
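The "equipartition of gradient energy" claim can be illustrated on a minimal toy problem (this sketch is not from the paper; the model, hyperparameters, and noise scale are illustrative assumptions). Take the two-parameter loss L(a, b) = (a*b - y)^2 / 2 with noisy targets y = 1 + noise, whose parameters are related by the continuous rescaling symmetry (a, b) -> (c*a, b/c). The claim, as summarized above, is that SGD's stationary state equalizes the time-averaged squared gradient across such symmetry-related parameters, much like thermal equipartition:

```python
import numpy as np

# Toy check of gradient-energy equipartition under SGD with label noise.
# Loss per sample: L(a, b) = (a*b - y)^2 / 2, y = 1 + sigma * noise.
rng = np.random.default_rng(1)
eta, sigma, steps = 0.02, 1.0, 40_000

a, b = 3.0, 1.0 / 3.0        # unbalanced start on the minimum manifold a*b = 1
e_a = e_b = 0.0              # accumulators for per-parameter gradient energy
for t in range(steps):
    y = 1.0 + sigma * rng.standard_normal()
    r = a * b - y            # per-sample residual
    g_a, g_b = r * b, r * a  # stochastic gradients
    if t >= steps // 2:      # average over the second half (near stationarity)
        e_a += g_a ** 2
        e_b += g_b ** 2
    a, b = a - eta * g_a, b - eta * g_b

# Despite the unbalanced start, the two gradient energies equalize.
print(f"mean g_a^2 = {e_a/(steps//2):.4f}, mean g_b^2 = {e_b/(steps//2):.4f}")
```

Under plain (noiseless) gradient descent the imbalance a^2 - b^2 is exactly conserved, so the equalization observed here is genuinely a noise-driven effect, consistent with the entropic-force picture described above.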
Abstract
With the rapid discovery of emergent phenomena in deep learning and large language models, explaining and understanding their cause has become an urgent need. Here, we propose a rigorous entropic-force theory for understanding the learning dynamics of neural networks trained with stochastic gradient descent (SGD) and its variants. Building on the theory of parameter symmetries and an entropic loss landscape, we show that representation learning is crucially governed by emergent entropic forces arising from stochasticity and discrete-time updates. These forces systematically break continuous parameter symmetries and preserve discrete ones, leading to a series of gradient balance phenomena that resemble the equipartition property of thermal systems. These phenomena, in turn, (a) explain the universal alignment of neural representations between AI models and lead to a proof of the Platonic Representation Hypothesis, and (b) reconcile the seemingly contradictory observations of sharpness- and flatness-seeking behavior of deep learning optimization. Our theory and experiments demonstrate that a combination of entropic forces and symmetry breaking is key to understanding emergent phenomena in deep learning.
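The symmetry-breaking mechanism described in the abstract can be sketched on the same two-parameter toy (a hypothetical example, not the paper's own experiment; all hyperparameters are arbitrary choices). For L(a, b) = (a*b - y)^2 / 2 with y = 1 + noise, the minimum manifold a*b = 1 carries a continuous rescaling symmetry, and gradient descent conserves a^2 - b^2, so it never moves along the symmetry orbit. SGD noise generates a drift along the valley toward the balanced point a = b = 1, the flattest point on the orbit, breaking the continuous symmetry while preserving the discrete one (a, b) -> (-a, -b):

```python
import numpy as np

# SGD with label noise on L(a, b) = (a*b - y)^2 / 2, y = 1 + sigma * noise.
rng = np.random.default_rng(0)
eta, sigma, steps = 0.02, 1.0, 20_000

a, b = 4.0, 0.25  # on the minimum manifold a*b = 1, but maximally unbalanced
for _ in range(steps):
    y = 1.0 + sigma * rng.standard_normal()    # noisy target (label noise)
    r = a * b - y                              # per-sample residual
    a, b = a - eta * r * b, b - eta * r * a    # simultaneous SGD update

# The conserved quantity of gradient flow, a^2 - b^2, has been driven to ~0:
# noise selected the balanced (flat) representative of the symmetry orbit.
print(f"a={a:.3f}, b={b:.3f}, a*b={a*b:.3f}, a^2-b^2={a*a - b*b:.3f}")
```

A one-step calculation makes the mechanism explicit: under the update above, a^2 - b^2 is multiplied by 1 - eta^2 * (r)^2 at every step, so any gradient noise contracts the imbalance geometrically, which is exactly an entropic (noise-induced) force with no counterpart in deterministic gradient flow.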