SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity Mixture

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the trade-off between downstream task performance and preservation of general capabilities during large language model (LLM) fine-tuning, this paper proposes Soft LoRA and Identity Mixture (SLIM), a novel mixture-of-experts (MoE) framework. It employs soft-parameterized LoRA modules jointly routed with identity mappings via dynamic gating, enabling adaptive collaboration between adapters and skip connections. A sliding-cluster-weighted routing mechanism is introduced to improve discrimination of out-of-domain samples. Furthermore, LoRA integration is formulated as a fast dynamic model-merging paradigm. Without incurring additional inference overhead and while tuning only a small number of parameters, the method achieves downstream performance on par with state-of-the-art PEFT approaches. Crucially, it significantly mitigates catastrophic forgetting, outperforming existing methods, and improves general capability retention by an average of 12.7% across multi-domain transfer evaluations.

📝 Abstract
Although many efforts have been made, it remains a challenge to balance the training budget, downstream performance, and general capabilities of LLMs in many applications. Training the whole model for downstream tasks is expensive and can easily result in catastrophic forgetting. Parameter-efficient fine-tuning (PEFT) reduces the training cost, but it still suffers from forgetting and limits learning on the downstream tasks. To efficiently fine-tune LLMs with less limitation on their downstream performance while mitigating the forgetting of general capabilities, we propose a novel mixture-of-experts (MoE) framework based on Soft LoRA and Identity Mixture (SLIM) that allows dynamic routing between LoRA adapters and skip connections, enabling the suppression of forgetting. We adopt weight yielding with sliding clustering for better out-of-domain discrimination to enhance the routing. We also propose to convert the mixture of low-rank adapters into a model-merging formulation and introduce fast dynamic merging of LoRA adapters to preserve the general capabilities of the base model. Extensive experiments demonstrate that the proposed SLIM is comparable to state-of-the-art PEFT approaches on the downstream tasks while achieving leading performance in mitigating catastrophic forgetting.
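The core idea in the abstract, soft routing between LoRA adapters and an identity (skip) expert on top of a frozen base weight, can be sketched in a few lines. The sketch below is illustrative only: the dimensions, the plain softmax router, and the variable names are assumptions, not the paper's actual architecture (which additionally uses sliding clustering for the routing weights and dynamic merging).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_lora = 8, 2, 2  # hidden dim, LoRA rank, number of LoRA experts (assumed sizes)

W0 = rng.normal(size=(d, d))                          # frozen base weight
A = [rng.normal(size=(r, d)) for _ in range(n_lora)]  # LoRA down-projections
B = [np.zeros((d, r)) for _ in range(n_lora)]         # LoRA up-projections (zero-initialized)
Wg = rng.normal(size=(n_lora + 1, d))                 # router: one logit per LoRA expert + identity expert

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def slim_layer(x):
    """One SLIM-style layer: the frozen base output plus a gated mixture of
    LoRA deltas; the extra 'identity' expert contributes no delta, acting as
    a skip connection that preserves the base model's behavior."""
    base = W0 @ x
    gates = softmax(Wg @ x)  # soft routing over [LoRA experts..., identity]
    delta = sum(gates[i] * (B[i] @ (A[i] @ x)) for i in range(n_lora))
    # gates[-1] (the identity expert) adds nothing by construction
    return base + delta

x = rng.normal(size=d)
y = slim_layer(x)
# With zero-initialized B, the layer reproduces the frozen base exactly
assert np.allclose(y, W0 @ x)
```

Because the identity expert adds no delta, routing mass sent to it suppresses adapter interference on out-of-domain inputs, which is how this family of designs mitigates forgetting without extra inference cost.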
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Language Models
Knowledge Retention
Innovation

Methods, ideas, or system contributions that make the work stand out.

SLIM Framework
Soft LoRA
Identity Mixture
Jiayi Han
Inspur Genersoft Co. Ltd., Inspur Group Co. Ltd., Shandong Key Laboratory of Automated Complex Network Software Construction
Liang Du
Associate Professor, Villanova University
electric power systems
Hongwei Du
Inspur Genersoft Co. Ltd., Inspur Group Co. Ltd., Shandong Key Laboratory of Automated Complex Network Software Construction
Xiangguo Zhou
Inspur Genersoft Co. Ltd., Inspur Group Co. Ltd., Shandong Key Laboratory of Automated Complex Network Software Construction
Yiwen Wu
Lehigh University
HCI
Weibo Zheng
Inspur Genersoft Co. Ltd., Inspur Group Co. Ltd., Shandong Key Laboratory of Automated Complex Network Software Construction
Donghong Han
Northeastern University, China