🤖 AI Summary
Cross-entropy scaling laws break down in ultra-large language models, where loss decreases markedly more slowly than power-law scaling predicts.
Method: We propose a novel ternary decomposition of cross-entropy into error entropy, self-alignment, and confidence, formally separating the contributions of prediction uncertainty, internal consistency, and model certainty. Through empirical analysis of 32 models spanning five orders of magnitude in size, across multiple datasets, we quantify each component's scaling behavior.
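The summary names the three components but not their formulas, so the sketch below is a hypothetical illustration rather than the paper's method: it shows one exact three-way split of per-token cross-entropy with the same flavors (an error-gap term, an alignment-style term, and an entropy term standing in for confidence), verified numerically.

```python
# Hypothetical illustration only: the paper's exact definitions of error
# entropy, self-alignment, and confidence are not given in this summary.
# Below is one exact identity with a similar flavor, splitting per-token
# cross-entropy -log q[y] into three parts that sum back exactly.
import numpy as np

def decompose_cross_entropy(q: np.ndarray, y: int):
    """Split -log q[y] (one-hot target y, model distribution q) into three terms."""
    y_hat = int(np.argmax(q))                 # the model's own top prediction
    H_q = -np.sum(q * np.log(q))              # the model's predictive entropy
    error = np.log(q[y_hat]) - np.log(q[y])   # log-prob gap; 0 when the top pick is correct
    alignment = -np.log(q[y_hat]) - H_q       # mode surprisal relative to average surprisal
    confidence = H_q                          # entropy term, standing in for a confidence-style part
    assert np.isclose(error + alignment + confidence, -np.log(q[y]))
    return error, alignment, confidence

q = np.array([0.7, 0.2, 0.1])                 # toy next-token distribution
print(decompose_cross_entropy(q, y=1))        # target is the second token
```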
Contribution/Results: We establish that only error entropy obeys a robust power-law scaling law; self-alignment and confidence remain approximately constant with scale. This explains why cross-entropy scaling becomes distorted at large scales: error entropy dominates cross-entropy in small models but decays toward zero as model size increases, so the roughly constant self-alignment and confidence terms come to dominate, and overall cross-entropy flattens away from power-law behavior. We derive a more accurate, interpretable error-entropy scaling law, providing a theoretically grounded, predictive framework for large-model training and evaluation.
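To make this mechanism concrete (the summary does not give the paper's constants or exponents, so the symbols below are schematic): if only error entropy scales as a power law in model size $N$ while the other two components stay constant, cross-entropy takes the form

$$
\mathcal{L}_{\mathrm{CE}}(N) \;=\; \underbrace{a\,N^{-\alpha}}_{\text{error entropy}} \;+\; \underbrace{c}_{\text{self-alignment}\,+\,\text{confidence}} .
$$

For small $N$ the power-law term dominates, so $\log \mathcal{L}_{\mathrm{CE}}$ is nearly linear in $\log N$ and the classical scaling law looks exact; for large $N$ the loss approaches the floor $c$, the apparent log-log slope flattens, and a pure power-law fit extrapolated from small scales predicts more improvement than is actually observed.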
📝 Abstract
The cross-entropy scaling law has long served as a key tool for guiding the development of large language models. It shows that cross-entropy loss decreases at a predictable power-law rate as model size increases. However, recent evidence indicates that this law breaks down at very large scales: the loss decreases more slowly than expected, which significantly complicates the development of large language models. In this paper, we hypothesize that the root cause is that cross-entropy itself does not truly scale; only one of its hidden components does. To investigate this, we introduce a novel decomposition of cross-entropy into three parts: Error-Entropy, Self-Alignment, and Confidence. We show both theoretically and empirically that this decomposition precisely captures the training dynamics and optimization objectives. Through extensive experiments on multiple datasets and 32 models spanning five orders of magnitude in size, we find that only error-entropy follows a robust power-law scaling, while the other two terms remain largely invariant. Moreover, error-entropy constitutes the dominant share of cross-entropy in small models but diminishes in proportion as models grow larger. This explains why the cross-entropy scaling law appears accurate at small scales but fails at very large ones. Our findings establish the error-entropy scaling law as a more accurate description of model behavior. We believe it will have wide applications in the training, understanding, and future development of large language models.