🤖 AI Summary
To address the high cost of repeatedly retraining LoRA adapters due to frequent large language model (LLM) updates, this paper proposes the first modular LoRA transfer framework enabling cross-version reuse of pre-trained LoRA weights. Methodologically: (i) we design an automatic layer- and head-level mapping mechanism based on Centered Kernel Alignment (CKA) and cosine similarity; (ii) we construct parameter transfer matrices to project LoRA weights across model versions; and (iii) we incorporate lightweight fine-tuning to ensure numerical stability. Evaluated on mathematical reasoning tasks using MiniCPM and Qwen, our approach achieves average improvements of 1.4 and 6.6 points over full retraining, respectively, while reducing memory consumption by 5.5 GB and decreasing training time by 78.23%. This work is the first to systematically tackle the challenge of continuous LoRA adaptation under LLM version evolution, establishing a new paradigm for efficient, green, and sustainable lightweight LLM adaptation.
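To make step (i) concrete, the sketch below shows one minimal way a CKA-based layer mapping could be implemented. The `linear_cka` and `map_layers` functions, the greedy argmax assignment, and the activation-matrix shapes are illustrative assumptions chosen for exposition; they are not the paper's released code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X: (n_samples, d_old) activations from one old-model layer.
    Y: (n_samples, d_new) activations from one new-model layer.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

def map_layers(old_layer_acts, new_layer_acts):
    """Greedy layer mapping: assign each new-model layer to the old-model
    layer whose activations are most CKA-similar (illustrative heuristic)."""
    mapping = {}
    for j, Y in enumerate(new_layer_acts):
        scores = [linear_cka(X, Y) for X in old_layer_acts]
        mapping[j] = int(np.argmax(scores))
    return mapping
```

The same similarity-and-assign pattern, with cosine similarity in place of CKA, would apply at the attention-head level.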
📝 Abstract
As Large Language Models (LLMs) are frequently updated, LoRA weights trained on earlier versions quickly become obsolete. The conventional practice of retraining LoRA weights from scratch on the latest model is costly, time-consuming, and environmentally detrimental, particularly as the diversity of LLMs and downstream tasks expands. This motivates a critical question: "How can we efficiently leverage existing LoRA weights to adapt to newer model versions?" To address this, we propose LoRASuite, a modular approach tailored specifically to various types of LLM updates. First, we compute a transfer matrix utilizing known parameters from both old and new LLMs. Next, we allocate corresponding layers and attention heads based on centered kernel alignment and cosine similarity metrics, respectively. A subsequent small-scale, skillful fine-tuning step ensures numerical stability. Experimental evaluations demonstrate that LoRASuite consistently surpasses small-scale vanilla LoRA methods. Notably, on backbone LLMs such as MiniCPM and Qwen, LoRASuite even exceeds the performance of full-scale LoRA retraining, with average improvements of +1.4 and +6.6 points on math tasks, respectively. Additionally, LoRASuite significantly reduces memory consumption by 5.5 GB and computational time by 78.23%.
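As a rough illustration of the transfer-matrix idea, the sketch below builds a least-squares map between the two versions' hidden spaces from their shared token embeddings and uses it to project one LoRA pair. The embedding-based construction, the function names (`dimension_map`, `project_lora_pair`), and the shape conventions are assumptions for this example; LoRASuite's actual transfer matrices, derived from the known parameters of both models, may be computed differently.

```python
import torch

def dimension_map(E_old, E_new):
    """Least-squares X with E_old @ X ≈ E_new, built from token embeddings
    shared by the two model versions (a hypothetical choice of the
    'known parameters').
    E_old: (V, d_old), E_new: (V, d_new)  ->  X: (d_old, d_new)."""
    return torch.linalg.lstsq(E_old, E_new).solution

def project_lora_pair(A_old, B_old, X_in, X_out):
    """Project one LoRA pair into the new model's hidden size, assuming new
    hidden states relate to old ones roughly as h_new ≈ h_old @ X.
    A_old: (r, d_old), B_old: (d_old, r)."""
    A_new = A_old @ torch.linalg.pinv(X_in).T   # (r, d_new)
    B_new = X_out.T @ B_old                     # (d_new, r)
    return A_new, B_new
```

The projected pair would then serve as the initialization for the small-scale fine-tuning step that restores numerical stability.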