How do language models learn facts? Dynamics, curricula and hallucinations

📅 2025-03-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the dynamics of factual knowledge acquisition during large language model (LLM) pretraining and the origins of hallucinations. To this end, the authors introduce a synthetic fact-recall task and combine longitudinal monitoring of training trajectories, attention-circuit analysis, and controlled data-distribution experiments to characterize the full knowledge acquisition process. They identify a three-phase learning dynamic for factual recall that closely tracks the emergence of specific attention circuits. They further show that imbalanced data distributions significantly shorten the learning plateau; that hallucinations arise early in training, concurrently with factual knowledge acquisition rather than as a late-stage artifact; and that supervised fine-tuning induces catastrophic interference, corrupting previously internalized parametric memory. These findings provide mechanistic insight and empirical evidence for understanding LLM knowledge representation, optimizing training curricula, and mitigating hallucination.
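The paper's synthetic fact-recall setup pairs made-up individuals with attribute values and tests whether the model can recall them, while varying how often each individual appears in training. The sketch below is a minimal, hypothetical reconstruction of such a data generator (all names, attributes, and templates are illustrative, not the paper's actual setup); the `alpha` parameter controls how imbalanced the sampling distribution is.

```python
import random

# Illustrative synthetic fact-recall data generator (hypothetical setup,
# not the paper's exact task). Each individual gets random attribute values;
# training examples are templated statements of those facts.
random.seed(0)

ATTRIBUTES = {
    "city": ["Paris", "Lima", "Oslo", "Cairo"],
    "job": ["doctor", "pilot", "chef", "judge"],
}

def make_population(n):
    """Assign random attribute values to n synthetic individuals."""
    return {
        f"person_{i}": {attr: random.choice(vals) for attr, vals in ATTRIBUTES.items()}
        for i in range(n)
    }

def zipf_weights(n, alpha=1.0):
    """Power-law sampling weights over individuals.

    alpha=0 gives a uniform (balanced) distribution; larger alpha
    makes the corpus more imbalanced, which the paper links to
    shorter learning plateaus.
    """
    return [1.0 / (rank + 1) ** alpha for rank in range(n)]

def sample_training_example(population, weights):
    """Draw one fact statement; high-weight individuals appear more often."""
    name = random.choices(list(population), weights=weights, k=1)[0]
    attr = random.choice(list(ATTRIBUTES))
    return f"The {attr} of {name} is {population[name][attr]}."

population = make_population(100)
weights = zipf_weights(len(population), alpha=1.0)  # imbalanced corpus
for _ in range(5):
    print(sample_training_example(population, weights))
```

Sweeping `alpha` while holding the population fixed would reproduce the kind of controlled distribution experiment the summary describes: same facts, different exposure frequencies.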

📝 Abstract
Large language models accumulate vast knowledge during pre-training, yet the dynamics governing this acquisition remain poorly understood. This work investigates the learning dynamics of language models on a synthetic factual recall task, uncovering three key findings: First, language models learn in three phases, exhibiting a performance plateau before acquiring precise factual knowledge. Mechanistically, this plateau coincides with the formation of attention-based circuits that support recall. Second, the training data distribution significantly impacts learning dynamics, as imbalanced distributions lead to shorter plateaus. Finally, hallucinations emerge simultaneously with knowledge, and integrating new knowledge into the model through fine-tuning is challenging, as it quickly corrupts its existing parametric memories. Our results emphasize the importance of data distribution in knowledge acquisition and suggest novel data scheduling strategies to accelerate neural network training.
Problem

Research questions and friction points this paper is trying to address.

Understand language models' learning dynamics on factual recall
Examine impact of data distribution on knowledge acquisition
Investigate challenges of integrating new knowledge without corruption
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models learn facts in three distinct phases
Attention circuits form during performance plateaus
Data distribution impacts learning dynamics significantly