🤖 AI Summary
This work investigates the phase transition underlying the emergence of semantic abstraction, the shift from concrete memorisation to abstract representation, in Transformer language model training, focusing on natural linguistic structure rather than synthetic symbolic or arithmetic tasks. We propose TRACE, a diagnostic framework integrating geometric (curvature), information-theoretic (dimensional stability), and linguistic (syntactic/semantic accuracy) signals, and introduce ABSynth, a fully annotated synthetic corpus of controllable complexity designed to track the evolution of abstraction precisely. Key findings: the phase transition occurs at the intersection of curvature collapse and dimensional stabilisation, coinciding with simultaneous sharp increases in syntactic and semantic accuracy; the transition is architecture-invariant; and feedforward networks primarily govern optimisation stability without altering the fundamental abstraction trajectory. Our study provides the first systematic characterisation of language abstraction emergence as a dual geometric–information-theoretic process.
📝 Abstract
Modern Transformer models exhibit phase transitions during training: distinct shifts from memorisation to abstraction. The mechanisms underlying these transitions, however, remain poorly understood. Prior work has often focused on endpoint representations or isolated signals such as curvature or mutual information, typically in symbolic or arithmetic domains, overlooking the emergence of linguistic structure. We introduce TRACE (Tracking Representation Abstraction and Compositional Emergence), a diagnostic framework combining geometric, informational, and linguistic signals to detect phase transitions in Transformer-based LMs. TRACE leverages ABSynth, a frame-semantic data generation method that produces synthetic corpora with controllable complexity, lexical distributions, and structural entropy, fully annotated with linguistic categories, enabling precise analysis of abstraction emergence. Experiments reveal that (i) phase transitions align with clear intersections between curvature collapse and dimension stabilisation; (ii) these geometric shifts coincide with emerging syntactic and semantic accuracy; (iii) abstraction patterns persist across architectural variants, with components such as feedforward networks affecting optimisation stability rather than fundamentally altering trajectories. This work advances our understanding of how linguistic abstractions emerge in LMs, offering insights into model interpretability, training efficiency, and compositional generalisation that could inform more principled approaches to LM development.
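To make the detection criterion in finding (i) concrete, here is a minimal sketch of how one might flag the checkpoint where a curvature collapse coincides with dimension stabilisation. The function name, thresholds, and the two input series (a per-checkpoint curvature proxy and an intrinsic-dimension estimate) are illustrative assumptions, not the paper's actual TRACE implementation:

```python
import numpy as np

def detect_transition(curvature, intrinsic_dim, window=3, dim_tol=0.05):
    """Return the first checkpoint index where a 'curvature collapse'
    coincides with intrinsic-dimension stabilisation, else None.

    curvature:      per-checkpoint loss-landscape curvature proxy
    intrinsic_dim:  per-checkpoint representation-dimension estimate
    Hypothetical criteria: collapse = curvature falls below half its
    running maximum; stabilisation = dimension varies by less than
    dim_tol (relative) over the trailing window of checkpoints.
    """
    curvature = np.asarray(curvature, dtype=float)
    intrinsic_dim = np.asarray(intrinsic_dim, dtype=float)
    running_max = np.maximum.accumulate(curvature)
    for t in range(window, len(curvature)):
        collapsed = curvature[t] < 0.5 * running_max[t]
        recent = intrinsic_dim[t - window:t + 1]
        stable = (recent.max() - recent.min()) / max(recent.mean(), 1e-12) < dim_tol
        if collapsed and stable:
            return t
    return None
```

With synthetic trajectories where curvature drops sharply mid-training and the dimension estimate then plateaus, the flagged index lands where both conditions first hold simultaneously, mirroring the "intersection" the abstract describes.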