Crosscoding Through Time: Tracking Emergence & Consolidation Of Linguistic Representations Throughout LLM Pretraining

📅 2025-09-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
How abstract linguistic capabilities—such as identifying irregular plural noun subjects—emerge during large language model (LLM) pretraining remains poorly understood, and conventional benchmarks fail to capture the dynamic evolution of internal representations. To address this, we propose a sparse crosscoder framework that aligns hidden-layer features across training checkpoints, enabling fine-grained, concept-level tracking of linguistic feature development. We further introduce the Relative Indirect Effects (RelIE) metric to quantify stage-wise changes in the causal importance of individual features. Our method supports architecture-agnostic, scalable analysis and identifies the emergence, maintenance, and discontinuation of linguistic features across multiple open-source LLMs. Experiments demonstrate the framework's effectiveness and generalizability in uncovering the developmental trajectories of linguistic capabilities and enhancing model interpretability.
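The crosscoder idea described above can be sketched in a few lines: a single sparse latent code is encoded from, and decoded back into, the activation spaces of several checkpoints, so each latent dimension becomes a candidate feature that is directly comparable across training stages. The sketch below is a minimal illustration, not the paper's implementation; all dimensions, the ReLU sparsity choice, and the random weights are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_latents, n_ckpts = 16, 64, 3  # hypothetical sizes

# Per-checkpoint encoder/decoder weights; the latent space is shared,
# which is what lets features be aligned across checkpoints.
W_enc = [rng.normal(size=(d_model, n_latents)) / np.sqrt(d_model)
         for _ in range(n_ckpts)]
W_dec = [rng.normal(size=(n_latents, d_model)) / np.sqrt(n_latents)
         for _ in range(n_ckpts)]
b = np.zeros(n_latents)

def encode(acts):
    """acts: one (d_model,) activation vector per checkpoint -> shared sparse code."""
    pre = sum(a @ W for a, W in zip(acts, W_enc)) + b
    return np.maximum(pre, 0.0)  # ReLU keeps the code nonnegative and sparse

def decode(z):
    """Decode the shared code back into each checkpoint's activation space."""
    return [z @ W for W in W_dec]

acts = [rng.normal(size=d_model) for _ in range(n_ckpts)]
z = encode(acts)
recons = decode(z)  # one reconstruction per checkpoint
```

In training, the decoders' reconstruction errors plus a sparsity penalty on `z` would be minimized; a latent that only reconstructs well at later checkpoints is a candidate "emerging" feature.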

📝 Abstract
Large language models (LLMs) learn non-trivial abstractions during pretraining, like detecting irregular plural noun subjects. However, it is not well understood when and how specific linguistic abilities emerge, as traditional evaluation methods such as benchmarking fail to reveal how models acquire concepts and capabilities. To bridge this gap and better understand model training at the concept level, we use sparse crosscoders to discover and align features across model checkpoints. Using this approach, we track the evolution of linguistic features during pretraining. We train crosscoders between open-sourced checkpoint triplets with significant performance and representation shifts, and introduce a novel metric, Relative Indirect Effects (RelIE), to trace the training stages at which individual features become causally important for task performance. We show that crosscoders can detect feature emergence, maintenance, and discontinuation during pretraining. Our approach is architecture-agnostic and scalable, offering a promising path toward more interpretable and fine-grained analysis of representation learning throughout pretraining.
Problem

Research questions and friction points this paper is trying to address.

Track emergence of linguistic features during pretraining
Understand when specific linguistic abilities develop in LLMs
Align features across model checkpoints using crosscoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse crosscoders track feature evolution
Relative Indirect Effects metric traces causal importance
Architecture-agnostic, scalable, and interpretable analysis method
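The RelIE metric traces when a feature becomes causally important by comparing its indirect effect across checkpoints. The paper's exact formula is not given here, so the helper below is one plausible reading: normalize a feature's per-checkpoint indirect effects so they sum to one, making the distribution of causal importance over training stages directly readable. The function name and normalization are assumptions for illustration.

```python
import numpy as np

def relie(indirect_effects):
    """Hypothetical RelIE sketch: take one feature's indirect effect at each
    checkpoint and normalize by the total absolute effect, so the result
    shows which training stage carries the feature's causal importance.
    (Assumed normalization; the paper's definition may differ.)"""
    ie = np.abs(np.asarray(indirect_effects, dtype=float))
    total = ie.sum()
    return ie / total if total > 0 else ie

# A feature whose causal effect grows over a checkpoint triplet:
scores = relie([0.0, 0.1, 0.4])
print(scores)  # mass concentrates on the latest checkpoint -> "emerging" feature
```

Under this reading, a roughly uniform RelIE profile would indicate a maintained feature, while mass concentrated on an early checkpoint would flag discontinuation.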