How to inject knowledge efficiently? Knowledge Infusion Scaling Law for Pre-training Large Language Models

📅 2025-09-19
🤖 AI Summary
To address the dual challenges in large language model (LLM) pretraining—insufficient domain knowledge injection leading to weak specialization, and excessive injection causing catastrophic forgetting—this paper proposes a knowledge injection scaling law grounded in critical collapse point analysis. Through systematic experiments across multiple model scales, the authors identify a model-size-dependent critical threshold for knowledge injection volume and formulate a generalizable scaling law that predicts the optimal injection magnitude and enables cross-scale transfer. The method significantly improves downstream task performance across diverse model sizes and pretraining token budgets, while preserving general-purpose capabilities and preventing memory collapse. The core contribution is the first empirical discovery of a principled scaling relationship governing knowledge injection magnitude with model scale, yielding an interpretable, quantifiable, and reusable injection strategy.

📝 Abstract
Large language models (LLMs) have attracted significant attention due to their impressive general capabilities across diverse downstream tasks. However, without domain-specific optimization, they often underperform on specialized knowledge benchmarks and even produce hallucinations. Recent studies show that strategically infusing domain knowledge during pretraining can substantially improve downstream performance. A critical challenge lies in balancing this infusion trade-off: injecting too little domain-specific data yields insufficient specialization, whereas excessive infusion triggers catastrophic forgetting of previously acquired knowledge. In this work, we focus on the phenomenon of memory collapse induced by over-infusion. Through systematic experiments, we make two key observations: (1) Critical collapse point: each model exhibits a threshold beyond which its knowledge retention capabilities sharply degrade. (2) Scale correlation: these collapse points scale consistently with the model's size. Building on these insights, we propose a knowledge infusion scaling law that predicts the optimal amount of domain knowledge to inject into large LLMs by analyzing their smaller counterparts. Extensive experiments across different model sizes and pretraining token budgets validate both the effectiveness and generalizability of our scaling law.
Problem

Research questions and friction points this paper is trying to address.

Balancing domain knowledge infusion trade-off in large language models
Preventing catastrophic forgetting during domain-specific knowledge injection
Predicting optimal knowledge infusion amounts across different model sizes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed a knowledge infusion scaling law
Predicts the optimal amount of domain knowledge to inject
Derived from analysis of smaller counterpart models
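The cross-scale prediction idea above can be sketched in code: fit a relationship between model size and observed collapse point on small models, then extrapolate to a larger target model. The paper does not disclose its exact functional form, so the power-law form and all data values below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical measurements on small models: model size (parameters)
# vs. the domain-token volume at which knowledge retention collapses.
# These numbers are illustrative, not from the paper.
sizes = np.array([1e8, 3e8, 1e9, 3e9])           # model parameters N
collapse = np.array([2e8, 5.5e8, 1.7e9, 4.6e9])  # collapse-point tokens

# Assume a power law: collapse_point ≈ a * N^b.
# Fit it as a line in log-log space.
b, log_a = np.polyfit(np.log(sizes), np.log(collapse), 1)
a = np.exp(log_a)

def predicted_collapse_point(n_params: float) -> float:
    """Extrapolate the fitted law to a larger (unseen) model size."""
    return a * n_params ** b

# Choose an injection budget for a 7B model that stays below the
# predicted collapse threshold (e.g., with a 20% safety margin).
budget = 0.8 * predicted_collapse_point(7e9)
```

The safety margin and the power-law form are assumptions; the point is only that collapse points measured on cheap small-scale runs can parameterize a predictor for expensive large-scale runs.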
Kangtao Lv
Zhejiang University
Haibin Chen
Taobao & Tmall Group of Alibaba
Yujin Yuan
Taobao & Tmall Group of Alibaba
Langming Liu
PhD, City University of Hong Kong
Recommendation, Large Language Models, Federated Learning
Shilei Liu
Taobao & Tmall Group of Alibaba
Yongwei Wang
Zhejiang University
AI4Media, Multimedia Forensics, Trust Media
Wenbo Su
Taobao & Tmall Group of Alibaba
Bo Zheng
Taobao & Tmall Group of Alibaba