TRACE for Tracking the Emergence of Semantic Representations in Transformers

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the phase-transition mechanism underlying the emergence of semantic abstraction, from concrete memorization to abstract representation, in Transformer language model training, focusing on natural linguistic structure rather than synthetic symbolic or arithmetic tasks. We propose TRACE, a diagnostic framework integrating geometric (curvature), information-theoretic (dimensional stability), and linguistic (syntactic/semantic accuracy) signals, and introduce ABSynth, a fully annotated synthetic corpus with controllable complexity, designed to precisely track the evolution of abstraction. Key findings: the phase transition occurs at the intersection of curvature collapse and dimensional stabilization, coinciding with simultaneous sharp increases in syntactic and semantic accuracy; the transition is architecture-invariant, with feed-forward networks primarily governing optimization stability rather than altering the fundamental abstraction trajectory. The study provides the first systematic characterization of the emergence of linguistic abstraction as a dual geometric and information-theoretic process.

📝 Abstract
Modern transformer models exhibit phase transitions during training: distinct shifts from memorisation to abstraction. The mechanisms underlying these transitions remain poorly understood. Prior work has often focused on endpoint representations or isolated signals such as curvature or mutual information, typically in symbolic or arithmetic domains, overlooking the emergence of linguistic structure. We introduce TRACE (Tracking Representation Abstraction and Compositional Emergence), a diagnostic framework combining geometric, informational, and linguistic signals to detect phase transitions in Transformer-based LMs. TRACE leverages a frame-semantic data generation method, ABSynth, that produces synthetic corpora with controllable complexity, lexical distributions, and structural entropy, fully annotated with linguistic categories, enabling precise analysis of abstraction emergence. Experiments reveal that (i) phase transitions align with clear intersections between curvature collapse and dimension stabilisation; (ii) these geometric shifts coincide with emerging syntactic and semantic accuracy; and (iii) abstraction patterns persist across architectural variants, with components such as feedforward networks affecting optimisation stability rather than fundamentally altering trajectories. This work advances our understanding of how linguistic abstractions emerge in LMs, offering insights into model interpretability, training efficiency, and compositional generalisation that could inform more principled approaches to LM development.
Problem

Research questions and friction points this paper is trying to address.

Understanding the mechanisms behind phase transitions in transformer training
Tracking the emergence of linguistic structure in LMs
Analyzing how abstraction patterns vary across architectural variants
Innovation

Methods, ideas, or system contributions that make the work stand out.

TRACE combines geometric, informational, and linguistic signals to detect phase transitions
ABSynth generates fully annotated synthetic corpora with controllable complexity
Phase transitions are located at the intersection of curvature collapse and dimension stabilisation
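To make the diagnostic idea concrete, here is a minimal sketch, not the paper's actual implementation, of flagging a training checkpoint where a curvature signal is collapsing while an intrinsic-dimension signal has stabilised. The function name, thresholds, and toy trajectories are all hypothetical illustrations of the criterion described in the abstract.

```python
def detect_transition(curvature, intrinsic_dim, curv_drop=0.1, dim_tol=0.05):
    """Return the first checkpoint index where curvature falls steeply
    while intrinsic dimension has (approximately) stopped changing.

    Thresholds are illustrative, not from the paper: a step counts if
    curvature drops by more than `curv_drop` of its initial magnitude
    while the dimension changes by less than `dim_tol` of its value.
    """
    # Per-step changes in each signal across consecutive checkpoints.
    d_curv = [b - a for a, b in zip(curvature, curvature[1:])]
    d_dim = [abs(b - a) for a, b in zip(intrinsic_dim, intrinsic_dim[1:])]
    for t in range(len(d_curv)):
        collapsing = d_curv[t] < -curv_drop * abs(curvature[0])
        stable = d_dim[t] < dim_tol * intrinsic_dim[t]
        if collapsing and stable:
            return t + 1  # index of the checkpoint after the transition step
    return None

# Toy trajectories: curvature collapses around checkpoint 4-5 while the
# intrinsic dimension plateaus, so the two criteria intersect there.
curv = [1.0, 0.98, 0.95, 0.9, 0.6, 0.3, 0.28, 0.27]
dim = [40, 35, 30, 26, 24, 23.5, 23.4, 23.4]
print(detect_transition(curv, dim))  # -> 5
```

In the real framework these signals would be measured on hidden representations across training checkpoints; the sketch only shows how an "intersection" of the two criteria can be turned into a single detected step.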
Nura Aljaafari
Department of Computer Science, University of Manchester, United Kingdom
Danilo S. Carvalho
University of Manchester
Artificial Intelligence; Natural Language Processing
André Freitas
Department of Computer Science, University of Manchester, United Kingdom; Idiap Research Institute, Switzerland; National Biomarker Centre, CRUK-MI, University of Manchester, United Kingdom