$\textit{New News}$: System-2 Fine-tuning for Robust Integration of New Knowledge

📅 2025-05-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a critical limitation in large language models (LLMs): naive fine-tuning (FT) internalizes new knowledge far less effectively than in-context learning (ICL), a discrepancy the authors term the FT-ICL gap. To address it, they propose System-2 Fine-Tuning (Sys2-FT), which uses self-play data generation protocols (paraphrases, implications, and self-QAs) to distill the knowledge available to a model with the news in context into the weights of the model without the context. Evaluated on the Qwen 2.5 model family over a multi-domain dataset of hypothetical news, the self-QA protocol of Sys2-FT substantially improves in-weight learning of the news and narrows the gap to ICL. The authors also discover the contextual shadowing effect, in which presenting the news in context before its rephrases or QAs during training degrades learning of the news, and they report preliminary evidence of an emerging scaling law for Sys2-FT.

📝 Abstract
Humans and intelligent animals can effortlessly internalize new information ("news") and accurately extract the implications for performing downstream tasks. While large language models (LLMs) can achieve this through in-context learning (ICL) when the news is explicitly given as context, fine-tuning remains challenging for the models to consolidate learning in weights. In this paper, we introduce $\textit{New News}$, a dataset composed of hypothetical yet plausible news spanning multiple domains (mathematics, coding, discoveries, leaderboards, events), accompanied by downstream evaluation questions whose correct answers critically depend on understanding and internalizing the news. We first demonstrate a substantial gap between naive fine-tuning and in-context learning (FT-ICL gap) on our news dataset. To address this gap, we explore a suite of self-play data generation protocols -- paraphrases, implications and Self-QAs -- designed to distill the knowledge from the model with context into the weights of the model without the context, which we term $\textit{System-2 Fine-tuning}$ (Sys2-FT). We systematically evaluate ICL and Sys2-FT performance across data domains and model scales with the Qwen 2.5 family of models. Our results demonstrate that the self-QA protocol of Sys2-FT significantly improves models' in-weight learning of the news. Furthermore, we discover the $\textit{contextual shadowing effect}$, where training with the news $\textit{in context}$ followed by its rephrases or QAs degrades learning of the news. Finally, we show preliminary evidence of an emerging scaling law of Sys2-FT.
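The self-QA protocol described above can be sketched as a small data-generation loop: a model that sees the news in context writes question-answer pairs, and those pairs alone (without the news) become the fine-tuning examples, forcing the knowledge into the weights. This is a minimal illustrative sketch, not the paper's implementation; `query_model` is a hypothetical stand-in for a real LLM call (e.g., to a Qwen 2.5 model), and the prompt wording and example news item are invented.

```python
def query_model(prompt: str) -> str:
    # Placeholder for an actual LLM API call. It returns a canned
    # QA pair here so the sketch runs end to end.
    return "Q: Who won the 2031 solar regatta?\nA: Team Aurora."

def self_qa_examples(news: str, num_rounds: int = 3) -> list[dict]:
    """Generate Sys2-FT-style training examples from a news item via self-QA."""
    examples = []
    for _ in range(num_rounds):
        # The generator sees the news in context...
        qa = query_model(
            f"News: {news}\n"
            "Write one question whose answer depends on this news, "
            "then answer it."
        )
        question, _, answer = qa.partition("\nA:")
        # ...but the training example deliberately omits the news, so the
        # answer cannot be read from context and must be learned in-weight.
        examples.append({
            "prompt": question.removeprefix("Q:").strip(),
            "completion": answer.strip(),
        })
    return examples

batch = self_qa_examples("Team Aurora won the 2031 solar regatta.")
print(len(batch), batch[0])
```

The key design point is the asymmetry between generation and training: context is present when producing the QA pairs and absent when fine-tuning on them, which is what distinguishes this from simply fine-tuning on the raw news text.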
Problem

Research questions and friction points this paper is trying to address.

Bridging gap between fine-tuning and in-context learning for knowledge integration
Enhancing models' in-weight learning of new information via System-2 Fine-tuning
Addressing the contextual shadowing effect when training with rephrases and QAs
Innovation

Methods, ideas, or system contributions that make the work stand out.

System-2 Fine-tuning for robust knowledge integration
Self-play data generation for weight distillation
Scaling law evidence for Sys2-FT performance