🤖 AI Summary
This work addresses the lack of theoretical foundations for large language model (LLM) training by proposing the Neural Thermodynamic Laws (NTL) framework, a principled correspondence between LLM training dynamics and classical thermodynamics established under the river-valley loss landscape assumption. Methodologically, it models the loss geometry, characterizes parameter evolution via stochastic differential equations, and combines the resulting gradient-flow picture with an equipartition-style argument to derive measurable thermodynamic quantities, including temperature, entropy, and heat capacity, as well as analogues of the three laws of thermodynamics. The NTL framework yields physically grounded, interpretable guidance for learning rate scheduling, with thermodynamics-inspired heuristics aimed at more stable convergence and stronger final performance in LLM training. More broadly, it marks a shift in the analysis of deep learning dynamics, from empirical heuristics toward principles rooted in statistical physics.
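To make the learning-rate guidance concrete, here is a minimal sketch assuming the NTL analogy that the learning rate plays the role of temperature: a warmup / constant / cool-down schedule can then be read as heating the system, letting it explore the river valley, and annealing it onto the valley floor. The schedule shape, function name, and constants below are illustrative assumptions, not the paper's prescription.

```python
def lr_schedule(step, total_steps, peak_lr=3e-4, warmup_frac=0.05, decay_frac=0.2):
    """Illustrative warmup / constant / decay ("annealing") learning-rate schedule."""
    warmup_steps = int(warmup_frac * total_steps)
    decay_start = int((1.0 - decay_frac) * total_steps)
    if step < warmup_steps:
        # Heat up: ramp the effective temperature linearly to its peak.
        return peak_lr * (step + 1) / warmup_steps
    if step < decay_start:
        # Constant-temperature phase: explore along the flat river direction.
        return peak_lr
    # Cool down: lower the effective temperature toward zero to settle into the valley.
    remaining = total_steps - decay_start
    return peak_lr * (total_steps - step) / remaining

# Example usage: print the schedule at a few points of a 10k-step run.
for s in (0, 250, 5_000, 9_000, 9_999):
    print(s, lr_schedule(s, total_steps=10_000))
```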
📝 Abstract
Beyond neural scaling laws, little is known about the laws underlying large language models (LLMs). We introduce Neural Thermodynamic Laws (NTL) -- a new framework that offers fresh insights into LLM training dynamics. On the theoretical side, we demonstrate that key thermodynamic quantities (e.g., temperature, entropy, heat capacity, thermal conduction) and classical thermodynamic principles (e.g., the three laws of thermodynamics and the equipartition theorem) naturally emerge under river-valley loss landscape assumptions. On the practical side, this scientific perspective yields intuitive guidelines for designing learning rate schedules.
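The following toy simulation (not the paper's code) illustrates one equipartition-style claim of this kind: for noisy gradient descent in a steep quadratic "valley" direction, the stationary parameter fluctuations look thermal, with an effective temperature proportional to the learning rate. The 1-D loss, noise model, and constants are illustrative assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_variance(lr, curvature=4.0, noise_std=1.0, steps=200_000):
    """Run noisy gradient descent on L(x) = 0.5 * curvature * x**2 and
    return the empirical variance of x after the transient dies out."""
    x, samples = 0.0, []
    for t in range(steps):
        grad = curvature * x + noise_std * rng.standard_normal()
        x -= lr * grad
        if t > steps // 2:  # keep only the equilibrated half of the trajectory
            samples.append(x)
    return np.var(samples)

# The exact stationary variance of this linear recursion is
# lr * sigma^2 / (2h - lr * h^2), i.e. ~ lr * sigma^2 / (2h) for small lr,
# so the "temperature" of the fluctuations scales linearly with the learning rate.
for lr in (0.01, 0.02, 0.04):
    measured = stationary_variance(lr)
    predicted = lr * 1.0**2 / (2 * 4.0 - lr * 4.0**2)
    print(f"lr={lr:.2f}  measured var={measured:.5f}  predicted var={predicted:.5f}")
```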