RECALL: REpresentation-aligned Catastrophic-forgetting ALLeviation via Hierarchical Model Merging

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe catastrophic forgetting and the reliance on historical data in continual learning (CL) for large language models (LLMs), this paper proposes RECALL, a data-free, hierarchical, adaptive CL framework. RECALL uses layer-wise hidden representations as proxies for learned knowledge: it clusters representative samples to compute cross-model representation similarity, then performs hierarchical model merging that preserves generic features in shallow layers while integrating task-specific knowledge in deeper layers, all without task labels or stored historical data. Evaluated on five NLP tasks across diverse CL settings, RECALL consistently outperforms state-of-the-art baselines in both knowledge retention and new-task generalization, without performance trade-offs, offering an efficient, scalable path for the online evolution of LLMs.
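As a rough illustration of the pipeline this summary describes, the sketch below clusters sample embeddings to pick "typical" samples and then computes a per-layer cosine similarity between two models' hidden states. Everything concrete here (the shapes, the use of KMeans, cosine similarity as the metric, the synthetic stand-in data) is an assumption for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch: layer-wise representation similarity between two
# models, measured over "typical" samples chosen by clustering.
# Shapes, KMeans, and cosine similarity are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def typical_indices(embeddings: np.ndarray, k: int = 8) -> np.ndarray:
    """Cluster sample embeddings and return, for each cluster, the index
    of the sample closest to its centroid (one 'typical' sample each)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    dists = np.linalg.norm(
        embeddings[:, None, :] - km.cluster_centers_[None, :, :], axis=-1
    )
    return dists.argmin(axis=0)  # shape (k,)

def layerwise_similarity(h_a: np.ndarray, h_b: np.ndarray) -> np.ndarray:
    """Mean cosine similarity per layer between two models' hidden states.
    h_a, h_b: (num_layers, num_samples, hidden_dim)."""
    num = (h_a * h_b).sum(-1)
    den = np.linalg.norm(h_a, axis=-1) * np.linalg.norm(h_b, axis=-1) + 1e-8
    return (num / den).mean(axis=-1)  # shape (num_layers,)

# Toy stand-in for real hidden states (layers x samples x dim).
rng = np.random.default_rng(0)
hidden_a = rng.normal(size=(12, 64, 32))
hidden_b = hidden_a + 0.1 * rng.normal(size=(12, 64, 32))  # a "nearby" model

idx = typical_indices(hidden_a[0], k=8)  # cluster on one layer's embeddings
sims = layerwise_similarity(hidden_a[:, idx], hidden_b[:, idx])
print(np.round(sims, 3))  # per-layer similarity in [-1, 1]
```

On real models, the two hidden-state tensors would come from forward passes of the base and task-adapted checkpoints over the same typical samples.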

📝 Abstract
We unveil that internal representations in large language models (LLMs) serve as reliable proxies of learned knowledge, and propose RECALL, a novel representation-aware model merging framework for continual learning without access to historical data. RECALL computes inter-model similarity from layer-wise hidden representations over clustered typical samples, and performs adaptive, hierarchical parameter fusion to align knowledge across models. This design enables the preservation of domain-general features in shallow layers while allowing task-specific adaptation in deeper layers. Unlike prior methods that require task labels or incur performance trade-offs, RECALL achieves seamless multi-domain integration and strong resistance to catastrophic forgetting. Extensive experiments across five NLP tasks and multiple continual learning scenarios show that RECALL outperforms baselines in both knowledge retention and generalization, providing a scalable and data-free solution for evolving LLMs.
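To make the abstract's shallow-versus-deep fusion concrete, here is a minimal sketch of depth-aware parameter interpolation, assuming both models share an architecture and weights are keyed by layer index. The specific rule below (an interpolation coefficient that grows with depth and with representational divergence) is an illustrative assumption, not the paper's published merging rule.

```python
# Minimal sketch of hierarchical, similarity-aware parameter merging.
# The rule: shallow layers stay close to the base model; deeper layers
# lean toward the task model, more so where representation similarity
# is low (i.e., where the task model learned something new).
import numpy as np

def merge_layer(base_w, task_w, depth_frac, sim, max_shift=0.9):
    """Interpolate one layer's weights.
    depth_frac: layer depth in [0, 1] (0 = shallowest layer).
    sim: representation similarity for this layer in [0, 1]."""
    alpha = max_shift * depth_frac * (1.0 - sim)  # weight on the task model
    return (1.0 - alpha) * base_w + alpha * task_w

def hierarchical_merge(base, task, sims):
    """base, task: dicts {layer_index: weight array}; sims: per-layer sims."""
    n = len(base)
    return {
        i: merge_layer(base[i], task[i], depth_frac=i / max(n - 1, 1), sim=sims[i])
        for i in base
    }

# Toy example with 4 "layers" of 2x2 weights.
rng = np.random.default_rng(1)
base = {i: rng.normal(size=(2, 2)) for i in range(4)}
task = {i: base[i] + rng.normal(size=(2, 2)) for i in range(4)}
sims = [0.95, 0.9, 0.6, 0.3]  # shallow layers similar, deep layers diverge

merged = hierarchical_merge(base, task, sims)
for i, w in merged.items():
    print(i, np.round(w, 2))
```

With this rule, the shallowest layer keeps the base weights exactly, which is one simple way to encode the "preserve domain-general features in shallow layers" intuition.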
Problem

Research questions and friction points this paper is trying to address.

How to prevent catastrophic forgetting in continual learning without access to historical data
How to align knowledge across models during parameter fusion
How to preserve domain-general features while enabling task-specific adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical model merging for continual learning (a sequential-merging sketch follows this list)
Layer-wise parameter fusion using representation similarity
Data-free domain integration without performance trade-offs
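As referenced in the first bullet, here is a hypothetical continual-learning loop built on such merging: each newly fine-tuned task model is folded into the running model as it arrives, so no historical task data is ever stored. The loop structure and the placeholder similarity values are assumptions for illustration, not the paper's exact algorithm.

```python
# Hypothetical continual-learning loop: fold each new task model into the
# running model via depth- and similarity-aware interpolation, keeping the
# process data-free. The per-layer rule is a stand-in.
import numpy as np

def fold_in(running, task_model, sims, max_shift=0.9):
    n = len(running)
    merged = {}
    for i, w in running.items():
        depth_frac = i / max(n - 1, 1)
        alpha = max_shift * depth_frac * (1.0 - sims[i])
        merged[i] = (1.0 - alpha) * w + alpha * task_model[i]
    return merged

rng = np.random.default_rng(2)
running = {i: rng.normal(size=(2, 2)) for i in range(4)}

# Stream of task models (e.g., one per domain), each compared against the
# current running model to get a per-layer similarity profile.
for step in range(3):
    task_model = {i: running[i] + rng.normal(size=(2, 2)) for i in running}
    sims = [0.95, 0.85, 0.6, 0.4]  # placeholder similarities
    running = fold_in(running, task_model, sims)
    print(f"after task {step}: layer-3 norm = {np.linalg.norm(running[3]):.2f}")
```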
👥 Authors

Bowen Wang
Shenzhen International Graduate School, Tsinghua University

Haiyuan Wan
Shenzhen International Graduate School, Tsinghua University

Liwen Shi
Shenzhen International Graduate School, Tsinghua University

Chen Yang
The Hong Kong University of Science and Technology, Guangzhou

Peng He
Shenzhen International Graduate School, Tsinghua University

Yue Ma
ByteDance
Interests: NLP, Dialogue System, LLM

Haochen Han
Peng Cheng Laboratory

Wenhao Li
Xiamen University

Tiao Tan
PhD, Tsinghua University
Interests: computer vision, embodied AI

Yongjian Li
Nankai University
Interests: supply chain

Fangming Liu
Professor, School of Computer Science & Technology, Huazhong University of Science & Technology
Interests: AI & Cloud Computing, Datacenter, LLM System, Edge Computing, Green Computing

Yifan Gong
Peng Cheng Laboratory

Sheng Zhang
Shenzhen International Graduate School, Tsinghua University