ID-LoRA: Efficient Low-Rank Adaptation Inspired by Matrix Interpolative Decomposition

📅 2026-02-24
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing low-rank adaptation methods struggle to balance parameter efficiency and performance in multi-task large language model settings, as reducing the rank often significantly degrades model capability. This work proposes ID-LoRA, which introduces matrix interpolative decomposition into parameter-efficient fine-tuning (PEFT) for the first time. By clustering pre-trained weights into parameter groups that all reuse a single shared trainable low-rank matrix, ID-LoRA achieves highly efficient parameter reuse. The method establishes a novel multi-task low-rank adaptation framework grounded in interpolative decomposition, outperforming full fine-tuning and state-of-the-art PEFT approaches across five benchmarks. It uses up to 46% fewer parameters than standard LoRA and achieves superior performance on the Code and MMLU tasks with only 54% of LoRA's parameters, effectively overcoming the trade-off between parameter count and model performance in low-rank adaptation.
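For readers unfamiliar with the underlying linear-algebra tool, the following is a minimal, generic sketch of a column interpolative decomposition (ID) via column-pivoted QR, illustrating the factorization A ≈ A[:, idx] @ X the paper's name alludes to. This is standard textbook ID, not the paper's ID-LoRA construction; the function name `column_id` and all variable names are our own.

```python
import numpy as np
from scipy.linalg import qr

def column_id(A, k):
    """Rank-k column interpolative decomposition: A ~ A[:, idx] @ X.

    idx selects k "skeleton" columns of A; X expresses every column of A
    as a linear combination of those skeleton columns.
    """
    # Column-pivoted QR, A[:, piv] = Q @ R, orders columns by importance.
    Q, R, piv = qr(A, mode="economic", pivoting=True)
    idx = piv[:k]
    # Interpolation coefficients for the remaining columns: R11 @ T = R12.
    T = np.linalg.solve(R[:k, :k], R[:k, k:])
    # Undo the pivoting so X lines up with A's original column order.
    inv_perm = np.argsort(piv)
    X = np.hstack([np.eye(k), T])[:, inv_perm]
    return idx, X

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))  # exact rank 3
idx, X = column_id(A, 3)
err = np.linalg.norm(A - A[:, idx] @ X) / np.linalg.norm(A)
print(err)  # near machine precision for an exactly rank-3 matrix
```

Because the skeleton consists of actual columns of A, an ID preserves interpretable structure of the original matrix, which is the property that makes it attractive for reusing groups of pretrained weights.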

πŸ“ Abstract
LoRA has become a universal Parameter-Efficient Fine-Tuning (PEFT) technique that equips Large Language Models (LLMs) to adapt quickly to new tasks. However, when these models are scaled up, even the latest LoRA variants still introduce considerable overhead in trainable parameters. Conversely, aggressively lowering the rank to curb this overhead markedly degrades performance in complex multi-task settings. We propose ID-LoRA, a novel PEFT framework that breaks the trade-off. Its core innovation lies in extracting and reusing clustered parameter groups from the pretrained weight matrix. These groups are then used to form multiple low-rank components, all of which share only a single initialized trainable low-rank matrix. This approach cuts the number of trainable parameters while keeping the model's capacity intact. We evaluate ID-LoRA on five diverse benchmarks: Mathematical Reasoning, Code Generation, MMLU, CommonsenseQA, and Safety Alignment. ID-LoRA outperforms both full fine-tuning and existing PEFT baselines (e.g., LoRA, DoRA, HydraLoRA) while using up to 46% fewer trainable parameters than the standard LoRA. In multi-task scenarios, it surpasses LoRA and its recent variants (e.g., DoRA and HydraLoRA) on both Code and MMLU tasks, yet requires only 54% of the trainable parameters demanded by the conventional LoRA.
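The abstract's central idea, multiple low-rank components that share a single trainable matrix, can be sketched in a few lines. This toy stand-in only counts parameters to show why sharing helps; the per-group frozen factors are random here, whereas ID-LoRA derives them by clustering the pretrained weights, a step this sketch does not reproduce. All names (`B_shared`, `A_fixed`, `delta_w`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_groups = 64, 4, 3

# Single shared trainable factor (the only tensor that would receive gradients).
B_shared = np.zeros((d, r))
# Frozen per-group factors; in ID-LoRA these come from clustering the
# pretrained weight matrix, which this toy example does not model.
A_fixed = [rng.standard_normal((r, d)) for _ in range(n_groups)]

def delta_w(i):
    # Low-rank update for group i: every group reuses the same B_shared.
    return B_shared @ A_fixed[i]

shared_trainables = B_shared.size            # 64 * 4 = 256, independent of n_groups
per_group_lora = n_groups * (d * r + r * d)  # 3 * 512 = 1536 for separate adapters
print(shared_trainables, per_group_lora)
```

The trainable-parameter count stays constant as the number of groups grows, which is the mechanism behind the reported 46% parameter reduction relative to standard LoRA.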
Problem

Research questions and friction points this paper is trying to address.

Parameter-Efficient Fine-Tuning
Low-Rank Adaptation
Large Language Models
Trainable Parameters
Multi-task Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

ID-LoRA
Low-Rank Adaptation
Parameter-Efficient Fine-Tuning
Matrix Interpolative Decomposition
Multi-task Learning
Xindian Ma
Tianjin University, Tianjin, China
Rundong Kong
Tianjin University, Tianjin, China
Peng Zhang
Professor, Tianjin University
Information Retrieval, Machine Learning, Natural Language Processing
Ruoxiang Huang
Tianjin University, Tianjin, China
Yongyu Jiang
Tianjin University, Tianjin, China