🤖 AI Summary
It remains unclear whether loss reduction during continued pretraining genuinely reflects knowledge acquisition. This work frames continued pretraining as a factual knowledge learning process, introducing a distribution-matched benchmark of factual documents and embedding diagnostic probes into the training loop to track fine-grained dynamics of factual knowledge retention and out-of-domain generalization. The study reveals a systematic dissociation between loss optimization and actual knowledge learning: knowledge acquisition is non-monotonic, exhibits low consolidation rates, and depends heavily on prior exposure. Through knowledge circuit analysis, the authors show for the first time that rapid reconfiguration of knowledge pathways during training creates narrow learning windows and induces early forgetting. Experiments across three instruction-tuned large language models suggest that training should be halted based on task-level learning dynamics rather than loss values alone.
📝 Abstract
Continual Pre-Training (CPT) is widely used for acquiring and updating factual knowledge in LLMs. This practice treats loss as a proxy for knowledge learning, yet offers no insight into how knowledge actually changes during training. We study CPT as a knowledge learning process rather than solely an optimization problem. We construct a controlled, distribution-matched benchmark of factual documents and interleave diagnostic probes directly into the CPT loop, enabling epoch-level measurement of knowledge acquisition dynamics and of changes in Out-Of-Domain (OOD) general skills (e.g., math). We further analyze how CPT reshapes knowledge circuits during training. Across three instruction-tuned LLMs and multiple CPT strategies, optimization and learning systematically diverge: loss decreases monotonically while factual learning is unstable and non-monotonic. Acquired facts are rarely consolidated, learning is strongly conditioned on prior exposure, and OOD performance degrades from early epochs. Circuit analysis reveals rapid reconfiguration of knowledge pathways across epochs, explaining the narrow acquisition windows and systematic forgetting. These results show that loss optimization is misaligned with learning progress in CPT and motivate stopping criteria based on task-level learning dynamics.
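The probe-interleaved training loop and task-level stopping criterion described above can be sketched in miniature. This is a hedged illustration, not the paper's implementation: `train_one_epoch` and `probe_accuracy` are hypothetical stand-ins (here with toy curves mimicking the reported dynamics, where loss falls monotonically while probe accuracy peaks and then decays), and the early-stopping rule is one plausible instantiation of "halt on task-level learning dynamics."

```python
# Sketch: epoch-level CPT loop with interleaved diagnostic probes.
# Stopping is driven by probe accuracy, not by the (monotonic) loss.
# All function names and curves are hypothetical placeholders.

def train_one_epoch(epoch):
    """Stand-in for one CPT epoch; returns training loss (monotone decrease)."""
    return 2.0 * (0.8 ** epoch)

def probe_accuracy(epoch):
    """Stand-in for a factual-recall probe run between epochs (non-monotonic)."""
    curve = [0.10, 0.35, 0.52, 0.48, 0.41, 0.38, 0.36]
    return curve[min(epoch, len(curve) - 1)]

def run_cpt(max_epochs=7, patience=2):
    """Halt once probe accuracy fails to improve for `patience` epochs,
    even though the loss is still decreasing."""
    best_acc, best_epoch, stale = 0.0, 0, 0
    history = []  # (epoch, loss, probe accuracy)
    for epoch in range(max_epochs):
        loss = train_one_epoch(epoch)
        acc = probe_accuracy(epoch)  # diagnostic probe interleaved into the loop
        history.append((epoch, loss, acc))
        if acc > best_acc:
            best_acc, best_epoch, stale = acc, epoch, 0
        else:
            stale += 1
            if stale >= patience:  # task-level early stopping
                break
    return best_epoch, history

best_epoch, history = run_cpt()
print(best_epoch)  # epoch with peak probe accuracy, not minimum loss
```

Note that a loss-based criterion would never trigger here, since the loss keeps falling; the probe-based rule stops shortly after factual recall peaks.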