🤖 AI Summary
To mitigate catastrophic forgetting in continual learning for large language models (LLMs), this paper proposes a dual-loop adaptive knowledge integration framework. The inner loop rapidly adapts to new tasks via dynamic parameter importance estimation, identifying critical parameters on the fly; the outer loop progressively integrates historical and newly acquired knowledge through redundancy-aware knowledge pruning and importance-weighted fusion. Inspired by human learning mechanisms, the method is the first to model dynamic parameter-importance distributions and establishes a scalable dual-loop optimization architecture. Evaluated on two mainstream continual learning benchmarks, the approach significantly alleviates forgetting across LLMs ranging from 770M to 13B parameters, achieving state-of-the-art performance without full retraining. It balances computational efficiency and generalization, demonstrating strong scalability and practical applicability in resource-constrained continual learning scenarios.
📝 Abstract
Continual learning (CL) is crucial for deploying large language models (LLMs) in dynamic real-world environments without costly retraining. While recent model ensemble and model merging methods guided by parameter importance have gained popularity, they often struggle to balance knowledge transfer and forgetting, mainly due to their reliance on static importance estimates during sequential training. In this paper, we present Recurrent-KIF, a novel CL framework for Recurrent Knowledge Identification and Fusion, which enables dynamic estimation of parameter importance distributions to enhance knowledge transfer. Inspired by human continual learning, Recurrent-KIF employs an inner loop that rapidly adapts to new tasks while identifying important parameters, coupled with an outer loop that globally manages the fusion of new and historical knowledge through redundant knowledge pruning and key knowledge merging. The inner and outer loops iterate through multiple rounds of fusion, allowing Recurrent-KIF to leverage intermediate training information and adaptively adjust fusion strategies based on evolving importance distributions. Extensive experiments on two CL benchmarks with various model sizes (from 770M to 13B) demonstrate that Recurrent-KIF effectively mitigates catastrophic forgetting and enhances knowledge transfer.
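The inner/outer interplay described above can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm: the gradient-magnitude importance score, the quantile-based pruning threshold, and the normalized fusion weights are all simplifying assumptions chosen to make the dual-loop idea concrete.

```python
import numpy as np

def inner_loop(theta, grads, lr=0.1):
    """Adapt to a new task while accumulating a per-parameter importance
    score (gradient magnitude is one common proxy, assumed here)."""
    importance = np.zeros_like(theta)
    for g in grads:
        theta = theta - lr * g          # rapid adaptation step
        importance += np.abs(g)         # running importance estimate
    return theta, importance / len(grads)

def outer_loop(theta_old, theta_new, imp_old, imp_new, prune_q=0.5):
    """Fuse historical and new knowledge: prune low-importance task
    updates (redundancy-aware pruning), then merge the rest with
    importance-derived weights (importance-weighted fusion)."""
    delta = theta_new - theta_old
    mask = imp_new >= np.quantile(imp_new, prune_q)   # keep only key updates
    weight = imp_new / (imp_new + imp_old + 1e-8)     # relative importance
    return theta_old + weight * (delta * mask)

# Toy run: 4 parameters, 3 simulated gradient steps on a "new task".
rng = np.random.default_rng(0)
theta_hist = np.zeros(4)
imp_hist = np.full(4, 0.5)
grads = [rng.normal(size=4) for _ in range(3)]
theta_task, imp_task = inner_loop(theta_hist.copy(), grads)
theta_fused = outer_loop(theta_hist, theta_task, imp_hist, imp_task)
```

In the paper's framework these two functions would be interleaved over multiple rounds within a task, so the fusion strategy can track the evolving importance distribution rather than a single post-hoc estimate.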