🤖 AI Summary
This paper challenges the prevailing assumption that next-token prediction loss monotonically reflects learning progress in Transformer language models by identifying non-power-law phase transitions in validation loss, prolonged plateaus followed by abrupt, sharp drops, across ten fundamental algorithmic tasks.
Method: Using representation probing, causal feature ablation, and phase-transition detection on the loss curves, we analyze the models' internal dynamics during training.
Contribution/Results: We identify a "quiet-to-loud" feature-evolution mechanism: critical sparse features are first acquired implicitly and remain latent for extended periods, then collectively coalesce to trigger abrupt performance jumps. This shows that validation loss fails to reliably capture qualitative shifts in the underlying representations. The phase transition reproduces stably across all ten tasks; causally important features are precisely localized; and targeted perturbation of a single such feature induces catastrophic performance collapse, confirming the necessity of these features for task mastery.
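The plateau-then-drop pattern the summary describes can be illustrated with a minimal detection sketch. This is not the paper's actual detector; it assumes a simple heuristic (a near-flat window of validation loss followed by a step whose relative drop exceeds a threshold), and all names and thresholds here are illustrative.

```python
def detect_phase_transition(losses, plateau_len=5, flat_tol=0.02, drop_tol=0.30):
    """Return the index of the first abrupt drop after a plateau, or None.

    A plateau is a window of `plateau_len` losses whose relative variation
    stays below `flat_tol`; an abrupt drop is a single step that removes
    more than `drop_tol` of the current loss.
    """
    for t in range(plateau_len, len(losses) - 1):
        window = losses[t - plateau_len:t]
        flat = (max(window) - min(window)) / max(window) < flat_tol
        drop = (losses[t] - losses[t + 1]) / losses[t] > drop_tol
        if flat and drop:
            return t + 1
    return None

# Synthetic loss curve: a long plateau near 2.0, then a sharp drop.
curve = [2.01, 2.00, 1.99, 2.00, 1.98, 1.99, 1.20, 0.60, 0.55]
print(detect_phase_transition(curve))  # → 6
```

A real detector would need to operate on noisy, log-spaced loss curves; dedicated change-point methods are better suited there, but the heuristic captures the qualitative shape the paper reports.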
📝 Abstract
We train Transformer-based language models on ten foundational algorithmic tasks and observe pronounced phase transitions in their loss curves that deviate from established power-law scaling trends. Over large ranges of compute, the validation loss barely improves, then abruptly decreases. Probing the models' internal representations reveals the learning of quiet features during the stagnant phase, followed by sudden acquisition of loud features that coincide with the sharp drop in loss. Our ablation experiments show that disrupting a single learned feature can dramatically degrade performance, providing evidence of their causal role in task performance. These findings challenge the prevailing assumption that next-token predictive loss reliably tracks incremental progress; instead, key internal features may be developing below the surface until they coalesce, triggering a rapid performance gain.
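The single-feature ablation described above can be sketched as projecting one learned feature direction out of the hidden activations. This is an illustrative stand-in, not the paper's actual probing or ablation procedure; `hidden` and `direction` are hypothetical toy values.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ablate_direction(hidden, direction):
    """Remove each hidden vector's component along `direction`."""
    norm = dot(direction, direction) ** 0.5
    d = [x / norm for x in direction]  # unit-normalize the feature direction
    return [[h_i - dot(h, d) * d_i for h_i, d_i in zip(h, d)] for h in hidden]

hidden = [[1.0, 2.0, 3.0], [0.5, -1.0, 2.0]]  # toy hidden states
direction = [0.0, 1.0, 0.0]                   # toy feature direction

ablated = ablate_direction(hidden, direction)
print(ablated)  # → [[1.0, 0.0, 3.0], [0.5, 0.0, 2.0]]
```

In the experiment the paper describes, one would re-run the model with such an ablation applied at a chosen layer and measure how much task performance degrades, which is what licenses the causal claim.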