🤖 AI Summary
This work investigates the observed decline in large language models' ability to predict human reading behavior (psychometric predictive power, PPP) beyond a critical inflection point during pretraining. Method: Leveraging correlation analysis, causal intervention, attention interpretability, and training-trajectory tracking, we identify a phase transition—characterized by the abrupt emergence of specialized attention heads—as the root cause of the PPP inflection. Contribution/Results: We establish a causal link between this phase transition and the degradation of PPP: the transition fundamentally reshapes learning dynamics, leading to progressive erosion of cognitive alignment with human reading patterns. Crucially, post-transition models do not develop harmful attention patterns; rather, their learning trajectories irreversibly diverge from empirically observed human reading regularities. This study is the first to attribute the PPP inflection to an intrinsic pretraining phase transition, uncovering a fundamental tension between capability emergence and cognitive alignment, and thereby providing theoretical grounding for controllable, alignment-aware pretraining.
📝 Abstract
LMs' alignment with human reading behavior (i.e., psychometric predictive power, PPP) is known to improve during pretraining up to a tipping point, beyond which it either plateaus or degrades. Various factors, such as word frequency, recency bias in attention, and context size, have been theorized to affect PPP, yet no current account explains why such a tipping point exists or how it interacts with LMs' pretraining dynamics more generally. We hypothesize that the underlying factor is a pretraining phase transition, characterized by the rapid emergence of specialized attention heads. We conduct a series of correlational and causal experiments showing that such a phase transition is responsible for the tipping point in PPP. We then show that, rather than producing attention patterns that directly degrade PPP, the phase transition alters the model's subsequent learning dynamics, such that further training continues to erode PPP.