Quiet Feature Learning in Algorithmic Tasks

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper challenges the prevailing assumption that next-token prediction loss monotonically reflects learning progress in Transformer language models by identifying phase transitions in validation loss, prolonged plateaus followed by abrupt drops, across ten foundational algorithmic tasks, in contrast to established power-law scaling. Method: representation probing, causal feature ablation, and phase-transition detection are used to analyze internal model dynamics during training. Contribution/Results: the authors identify a "quiet-to-loud" feature-learning mechanism: critical features are acquired during the stagnant phase and remain latent for extended periods, then coalesce to trigger abrupt performance jumps. This shows that validation loss fails to reliably capture qualitative shifts in the underlying representations. The phase transition reproduces across all ten tasks, causally relevant features are localized, and perturbing a single such feature can dramatically degrade performance, confirming its causal role in task mastery.

📝 Abstract
We train Transformer-based language models on ten foundational algorithmic tasks and observe pronounced phase transitions in their loss curves that deviate from established power-law scaling trends. Over large ranges of compute, the validation loss barely improves, then abruptly decreases. Probing the models' internal representations reveals the learning of quiet features during the stagnant phase, followed by sudden acquisition of loud features that coincide with the sharp drop in loss. Our ablation experiments show that disrupting a single learned feature can dramatically degrade performance, providing evidence of their causal role in task performance. These findings challenge the prevailing assumption that next-token predictive loss reliably tracks incremental progress; instead, key internal features may be developing below the surface until they coalesce, triggering a rapid performance gain.
Problem

Research questions and friction points this paper is trying to address.

Understanding phase transitions in Transformer loss curves during algorithmic task training
Investigating quiet vs. loud feature-learning dynamics in language models
Challenging the assumption that next-token loss reliably reflects incremental learning progress
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer models trained on ten foundational algorithmic tasks
Quiet features are learned during loss plateaus, before sudden performance gains
Ablating a single learned feature demonstrates its causal role in task performance
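The probing-and-ablation idea above can be illustrated with a toy sketch (hypothetical code, not the authors' implementation): hidden states encode a task-relevant signal along one feature direction, a linear probe reads it out, and projecting that single direction away collapses probe accuracy.

```python
import numpy as np

def ablate_feature(hidden, direction):
    """Remove each hidden state's component along one feature direction."""
    d = direction / np.linalg.norm(direction)
    return hidden - np.outer(hidden @ d, d)

# Toy setup: one direction in an 8-d hidden space linearly encodes a binary label.
rng = np.random.default_rng(0)
feature_dir = rng.normal(size=8)
feature_dir /= np.linalg.norm(feature_dir)
labels = rng.integers(0, 2, size=100)
hidden = 0.1 * rng.normal(size=(100, 8)) + np.outer(2 * labels - 1, feature_dir)

def probe_accuracy(h):
    # Linear probe: classify by the sign of the projection onto the feature direction.
    preds = (h @ feature_dir > 0).astype(int)
    return (preds == labels).mean()

acc_before = probe_accuracy(hidden)                          # near-perfect
acc_after = probe_accuracy(ablate_feature(hidden, feature_dir))  # near chance
```

Zeroing a single direction leaves the rest of the representation untouched, which mirrors the paper's point: if one feature carries the task-critical signal, ablating it alone is enough to destroy performance.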
Prudhviraj Naidu
Department of Computer Science, UC San Diego
Zixian Wang
University of California, San Diego
Leon Bergen
Associate Professor, UCSD
Computational Linguistics
R. Paturi
Department of Computer Science, UC San Diego