🤖 AI Summary
Large language models (LLMs) struggle to efficiently acquire and update dynamic, infrequently repeated knowledge during autoregressive pre-training. This work argues that LLM knowledge learning can be viewed as an implicit supervised task hidden in the pre-training objective, so techniques that improve generalization in supervised learning should also improve knowledge acquisition. Method: Building on this view, the authors propose (1) formatting-based data augmentation, which grows in-distribution samples by varying document formatting rather than wording and therefore avoids the risk of altering facts that text paraphrasing carries; (2) sharpness-aware minimization (SAM) as the optimizer for continued pre-training to improve generalization; and (3) a direct extension of the analysis and methods to instruction tuning. Results: Extensive experiments validate the analysis and demonstrate consistent gains over standard baselines in both continued pre-training and instruction tuning on knowledge-intensive and timeliness-sensitive tasks. The approach offers an interpretable, scalable recipe for keeping LLM knowledge up to date.
📝 Abstract
Large language models (LLMs) are trained on enormous corpora of documents containing extensive world knowledge. However, it is still not well understood how knowledge is acquired via autoregressive pre-training. This lack of understanding greatly hinders effective knowledge learning, especially for continued pre-training on up-to-date information, since such evolving information often lacks the diverse repetition that foundational knowledge enjoys. In this paper, we focus on understanding and improving LLM knowledge learning. We find and verify that knowledge learning in LLMs can be viewed as an implicit supervised task hidden in the autoregressive pre-training objective. Our findings suggest that knowledge learning in LLMs would therefore benefit from methods designed to improve generalization on supervised tasks. Based on our analysis, we propose formatting-based data augmentation to grow in-distribution samples; unlike text paraphrasing, it does not risk altering the facts embedded in the documents. We also introduce sharpness-aware minimization (SAM) as an effective optimization algorithm to further improve generalization. Moreover, our analysis and methods can be readily extended to instruction tuning. Extensive experimental results validate our findings and demonstrate the effectiveness of our methods in both continued pre-training and instruction tuning. This paper offers new perspectives and insights for interpreting and designing effective knowledge-learning strategies for LLMs.
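The abstract does not spell out which formats the augmentation uses; the sketch below is only an illustrative assumption of the general idea: keep the wording of a source document verbatim and vary only its surface formatting, so each variant is an additional in-distribution training sample that cannot corrupt the underlying facts the way paraphrasing might. The `format_variants` helper and its templates are hypothetical, not the paper's exact method.

```python
def format_variants(document: str, title: str) -> list[str]:
    """Render the same document text under several surface formats.

    The wording of the facts is kept verbatim; only the layout changes,
    so the augmented copies preserve the embedded knowledge exactly.
    (Illustrative templates only; the paper's formats may differ.)
    """
    bulleted = "\n".join(f"- {line}" for line in document.splitlines() if line.strip())
    return [
        document,                             # original plain text
        f"# {title}\n\n{document}",           # markdown heading + body
        f"Title: {title}\nBody: {document}",  # key-value layout
        bulleted,                             # line-by-line bullet list
    ]
```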
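Sharpness-aware minimization itself follows a standard two-pass procedure: perturb the weights toward higher loss within a small L2 ball, then apply the gradients computed at the perturbed point with a base optimizer. Below is a minimal PyTorch-style sketch of one SAM update, assuming a user-supplied `loss_fn(model, batch)` that computes the usual next-token-prediction loss; the function name and the `rho` value are illustrative assumptions, not the paper's exact training configuration.

```python
import torch

def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
    """One sharpness-aware minimization (SAM) update.

    1. Compute gradients at the current weights.
    2. Ascend to the nearby "sharp" point w + e(w) inside an L2 ball of radius rho.
    3. Re-compute gradients there, restore the weights, and take the descent step.
    """
    base_optimizer.zero_grad()

    # First forward/backward pass: gradients at the current weights.
    loss = loss_fn(model, batch)
    loss.backward()

    # Ascent step: move each parameter along its gradient, scaled to the rho-ball.
    with torch.no_grad():
        grad_norm = torch.norm(
            torch.stack([p.grad.norm(2) for p in model.parameters() if p.grad is not None])
        )
        scale = rho / (grad_norm + 1e-12)
        perturbations = []
        for p in model.parameters():
            if p.grad is None:
                continue
            e_w = p.grad * scale
            p.add_(e_w)                      # w <- w + e(w)
            perturbations.append((p, e_w))

    # Second forward/backward pass: gradients at the perturbed weights.
    base_optimizer.zero_grad()
    loss_fn(model, batch).backward()

    # Restore the original weights and apply the descent step with those gradients.
    with torch.no_grad():
        for p, e_w in perturbations:
            p.sub_(e_w)                      # w <- w - e(w)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```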