Recurrent Knowledge Identification and Fusion for Language Model Continual Learning

📅 2025-02-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To mitigate catastrophic forgetting in continual learning of large language models (LLMs), this paper proposes a dual-loop adaptive knowledge integration framework. The inner loop enables rapid adaptation to new tasks via dynamic parameter importance estimation, identifying critical parameters on-the-fly; the outer loop progressively integrates historical and newly acquired knowledge through redundancy-aware knowledge pruning and importance-weighted fusion. Inspired by human learning mechanisms, the method introduces— for the first time—the dynamic importance distribution modeling technique and establishes a scalable dual-loop optimization architecture. Evaluated on two mainstream continual learning benchmarks, the approach significantly alleviates forgetting across LLMs ranging from 770M to 13B parameters, achieving state-of-the-art performance without full retraining. It balances computational efficiency and generalization capability, demonstrating strong scalability and practical applicability in resource-constrained continual learning scenarios.

Technology Category

Application Category

📝 Abstract
Continual learning (CL) is crucial for deploying large language models (LLMs) in dynamic real-world environments without costly retraining. While recent model ensemble and model merging methods guided by parameter importance have gained popularity, they often struggle to balance knowledge transfer and forgetting, mainly due to the reliance on static importance estimates during sequential training. In this paper, we present Recurrent-KIF, a novel CL framework for Recurrent Knowledge Identification and Fusion, which enables dynamic estimation of parameter importance distributions to enhance knowledge transfer. Inspired by human continual learning, Recurrent-KIF employs an inner loop that rapidly adapts to new tasks while identifying important parameters, coupled with an outer loop that globally manages the fusion of new and historical knowledge through redundant knowledge pruning and key knowledge merging. These inner-outer loops iteratively perform multiple rounds of fusion, allowing Recurrent-KIF to leverage intermediate training information and adaptively adjust fusion strategies based on evolving importance distributions. Extensive experiments on two CL benchmarks with various model sizes (from 770M to 13B) demonstrate that Recurrent-KIF effectively mitigates catastrophic forgetting and enhances knowledge transfer.
Problem

Research questions and friction points this paper is trying to address.

Dynamic parameter importance estimation
Mitigate catastrophic forgetting
Enhance knowledge transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic parameter importance estimation
Inner-outer loop knowledge fusion
Redundant knowledge pruning strategy
🔎 Similar Papers
No similar papers found.
Y
Yujie Feng
The Hong Kong Polytechnic University
X
Xujia Wang
Tsinghua University
Zexin Lu
Zexin Lu
Sichuan University
S
Shenghong Fu
The Hong Kong Polytechnic University
Guangyuan Shi
Guangyuan Shi
connect.polyu.hk
LLMs FinetuningLarge Language ModelMulti-Task LearningContinual Learning
Yongxin Xu
Yongxin Xu
Peking University
Large Language ModelsKnowledge GraphsElectronic Medical Record Analysis
Y
Yasha Wang
Peking University
Philip S. Yu
Philip S. Yu
Professor of Computer Science, University of Illinons at Chicago
Data miningDatabasePrivacy
X
Xu Chu
Peking University
X
Xiao-Ming Wu
The Hong Kong Polytechnic University