Disaggregation Reveals Hidden Training Dynamics: The Case of Agreement Attraction

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Language models generally produce grammatical text but make errors more often in specific syntactic environments, suggesting stage-like characteristics in their acquisition of syntax. Method: The authors construct a fine-grained, controllable dataset and adapt psycholinguistic paradigms to systematically track error distributions and their evolution across the entire training trajectory. Contribution/Results: Early in training, models rely on word frequency and local heuristics and lack abstract grammatical knowledge; as training progresses, they gradually develop structure-sensitive generalization. Crucially, the analysis identifies a transition point marking the shift from surface-level statistical learning to deeper syntactic generalization. The work characterizes the staged dynamics of syntactic learning in language models and offers an interpretable, reproducible analytical framework for investigating the evolution of internal linguistic representations.

📝 Abstract
Language models generally produce grammatical text, but they are more likely to make errors in certain contexts. Drawing on paradigms from psycholinguistics, we carry out a fine-grained analysis of those errors in different syntactic contexts. We demonstrate that by disaggregating over the conditions of carefully constructed datasets and comparing model performance on each over the course of training, it is possible to better understand the intermediate stages of grammatical learning in language models. Specifically, we identify distinct phases of training where language model behavior aligns with specific heuristics such as word frequency and local context rather than generalized grammatical rules. We argue that taking this approach to analyzing language model behavior more generally can serve as a powerful tool for understanding the intermediate learning phases, overall training dynamics, and the specific generalizations learned by language models.
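The agreement-attraction paradigm the abstract describes can be sketched in a few lines. The example below is a toy illustration, not the paper's actual pipeline: the 2x2 item set (subject number x attractor number) and the scoring function are hypothetical, and `score` stands in for a checkpoint's summed token log-probabilities with a deliberately naive "agree with the nearest noun" heuristic, so the disaggregated accuracies expose exactly the attraction errors the paper studies.

```python
from collections import defaultdict

# Hypothetical agreement-attraction minimal pairs in a 2x2 design:
# subject number x attractor number. A model "succeeds" on an item when
# it scores the grammatical variant above the ungrammatical one.
ITEMS = [
    # (condition, grammatical sentence, ungrammatical sentence)
    ("sg_subj-sg_attr", "The key to the cabinet is rusty.",
                        "The key to the cabinet are rusty."),
    ("sg_subj-pl_attr", "The key to the cabinets is rusty.",
                        "The key to the cabinets are rusty."),
    ("pl_subj-sg_attr", "The keys to the cabinet are rusty.",
                        "The keys to the cabinet is rusty."),
    ("pl_subj-pl_attr", "The keys to the cabinets are rusty.",
                        "The keys to the cabinets is rusty."),
]

def score(sentence: str) -> float:
    """Stand-in for a language model's sentence log-probability.
    Mimics a purely local heuristic: reward the verb form that agrees
    with the noun immediately preceding it (the attractor, when the
    subject and attractor mismatch)."""
    words = sentence.rstrip(".").split()
    verb = "are" if "are" in words else "is"
    nearest_noun = words[words.index(verb) - 1]
    agrees_locally = nearest_noun.endswith("s") == (verb == "are")
    return 1.0 if agrees_locally else 0.0

def disaggregate(items):
    """Accuracy per condition rather than one pooled number."""
    correct, total = defaultdict(int), defaultdict(int)
    for cond, gram, ungram in items:
        total[cond] += 1
        if score(gram) > score(ungram):
            correct[cond] += 1
    return {c: correct[c] / total[c] for c in total}

print(disaggregate(ITEMS))
```

Pooled over all items this heuristic scores 50%, which looks like chance; disaggregation shows it is perfect on the attractor-match conditions and fails on both mismatch conditions, the signature of local-context behavior the paper identifies in early training.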
Problem

Research questions and friction points this paper is trying to address.

Analyzing grammatical errors in language models across syntactic contexts
Identifying intermediate learning phases through disaggregated training data
Revealing heuristic-based behaviors versus generalized grammatical rule learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disaggregating conditions to analyze training dynamics
Identifying heuristic-based learning phases in models
Comparing performance across constructed syntactic datasets
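Comparing per-condition performance across checkpoints is what lets the paper locate a transition point. A minimal sketch of that bookkeeping, with entirely made-up accuracy numbers and a crude threshold-crossing criterion (the paper's actual criterion may differ):

```python
# Hypothetical per-condition accuracies at successive training steps.
# A pooled average would blur the late gain on the hard
# attractor-mismatch condition; tracking conditions separately
# makes the transition visible.
trajectory = {
    1_000:  {"match": 0.90, "mismatch": 0.10},
    10_000: {"match": 0.92, "mismatch": 0.15},
    50_000: {"match": 0.95, "mismatch": 0.80},
}

def transition_step(traj, condition, threshold=0.5):
    """First checkpoint where accuracy on a condition crosses the
    threshold: a crude proxy for the shift from surface heuristics
    to structure-sensitive generalization."""
    for step in sorted(traj):
        if traj[step][condition] >= threshold:
            return step
    return None

print(transition_step(trajectory, "mismatch"))
```

Here the "match" condition clears the threshold from the first checkpoint, while the "mismatch" condition only crosses it between steps 10,000 and 50,000, illustrating how disaggregated trajectories date the transition that a single aggregate curve would smooth over.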