How Do Large Language Models Learn Concepts During Continual Pre-Training?

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the mechanisms of concept acquisition, retention, and forgetting in large language models during continual pretraining, as well as the interference and synergy among concepts. By constructing internal “concept circuits” aligned with specific concepts and integrating graph-structured metrics with temporal analysis, the work reveals—for the first time at the circuit level—the staged dynamics of concept learning. The findings demonstrate a negative correlation between learning gain and forgetting intensity, significant interference induced by semantic similarity, and concept-dependent variability in knowledge transfer efficacy. Experimental results validate that concept circuits effectively capture the dynamic evolution of concepts and enable quantification of inter-concept interactions and their impact on learning performance.

📝 Abstract
Human beings primarily understand the world through concepts (e.g., dog), abstract mental representations that structure perception, reasoning, and learning. However, how large language models (LLMs) acquire, retain, and forget such concepts during continual pretraining remains poorly understood. In this work, we study how individual concepts are acquired and forgotten, as well as how multiple concepts interact through interference and synergy. We link these behavioral dynamics to LLMs' internal Concept Circuits, computational subgraphs associated with specific concepts, and incorporate Graph Metrics to characterize circuit structure. Our analysis reveals: (1) LLMs' concept circuits provide a non-trivial, statistically significant signal of concept learning and forgetting; (2) concept circuits exhibit a stage-wise temporal pattern during continual pretraining, with an early increase followed by a gradual decrease and stabilization; (3) concepts with larger learning gains tend to exhibit greater forgetting under subsequent training; (4) semantically similar concepts induce stronger interference than weakly related ones; (5) conceptual knowledge differs in its transferability, with some concepts significantly facilitating the learning of others. Together, our findings offer a circuit-level view of concept learning dynamics and inform the design of more interpretable and robust concept-aware training strategies for LLMs.
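The abstract defines a concept circuit as a computational subgraph associated with a concept and says graph metrics are tracked over continual-pretraining checkpoints. The paper's actual circuit-discovery method and metric set are not given here, so the following is a minimal illustrative sketch under assumed definitions: a circuit is a directed edge set over hypothetical model components (attention heads `h*`, MLP units `m*`), and two simple structural metrics (edge density, mean out-degree) are compared across two made-up checkpoints.

```python
# Hedged sketch, NOT the paper's implementation: treat a "concept circuit"
# as a directed graph over model components and compute basic graph metrics
# one could track across continual-pretraining checkpoints.

def circuit_metrics(nodes, edges):
    """Return (edge density, mean out-degree) for a directed circuit.

    nodes: list of component names; edges: list of (src, dst) pairs.
    Density is |E| / (n * (n - 1)), the directed-graph convention.
    """
    n = len(nodes)
    density = len(edges) / (n * (n - 1)) if n > 1 else 0.0
    out_deg = {u: 0 for u in nodes}
    for u, _v in edges:
        out_deg[u] += 1
    mean_out = sum(out_deg.values()) / n if n else 0.0
    return density, mean_out

# Hypothetical circuits for one concept at two checkpoints: component names
# and edges are invented for illustration only.
ckpt_early = (["h0", "h1", "m0"], [("h0", "m0"), ("h1", "m0")])
ckpt_late = (["h0", "h1", "m0"], [("h0", "m0"), ("h1", "m0"), ("h0", "h1")])

d_early, _ = circuit_metrics(*ckpt_early)
d_late, _ = circuit_metrics(*ckpt_late)
print(f"density early={d_early:.2f} late={d_late:.2f}")
```

Plotting such per-checkpoint metrics over training steps is one plausible way to expose the stage-wise pattern (early increase, then decrease and stabilization) the abstract reports.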
Problem

Research questions and friction points this paper is trying to address.

concept learning
continual pretraining
large language models
concept forgetting
concept interference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept Circuits
Continual Pre-Training
Graph Metrics
Concept Interference
Concept Transferability