Accurate and Efficient Low-Rank Model Merging in Core Space

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To balance efficiency and cross-task accuracy in LoRA model merging, this paper proposes Core Space: a framework that constructs a shared alignment basis and losslessly projects task-specific LoRA weights into a unified low-dimensional core subspace before fusion. The authors prove that this projection preserves the original rank and information content, avoiding the representational collapse of conventional weighted averaging. Core Space combines low-rank adaptation, orthogonal-projection alignment, and a complexity analysis to aggregate weights efficiently within the core subspace. On diverse vision and language multi-task benchmarks, it achieves state-of-the-art performance while incurring only ~20% of the computational overhead of existing merging methods, delivering superior accuracy, resource efficiency, and strong generalization across tasks.
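To make the core idea concrete, here is a minimal sketch of merging LoRA updates inside a shared low-rank basis. The function name, weighting scheme, and use of QR factorization are illustrative assumptions, not the authors' exact algorithm; the sketch only demonstrates the general pattern of projecting each task's update into a small shared subspace, averaging there, and mapping back.

```python
import numpy as np

def core_space_merge(loras, weights=None):
    """Merge task-specific LoRA updates (B_i @ A_i) in a shared low-rank basis.

    Hypothetical sketch, not the paper's exact method: build orthonormal
    bases spanning all task factors, average the projected updates in the
    resulting small "core" subspace, then reconstruct the merged update.
    """
    n = len(loras)
    weights = weights or [1.0 / n] * n
    # Stack the columns of every B_i and the rows of every A_i so the bases
    # span the joint column/row subspaces of all task updates.
    B_cat = np.concatenate([B for B, _ in loras], axis=1)   # (d_out, n*r)
    A_cat = np.concatenate([A for _, A in loras], axis=0)   # (n*r, d_in)
    U, _ = np.linalg.qr(B_cat)      # orthonormal basis for col-space of B_cat
    V, _ = np.linalg.qr(A_cat.T)    # orthonormal basis for row-space of A_cat
    # Average inside the (n*r) x (n*r) core space, then map back.
    core = sum(w * (U.T @ B @ A @ V) for w, (B, A) in zip(weights, loras))
    return U @ core @ V.T           # merged update, rank <= n*r
```

Because the bases span every task's column and row space, the projection is exact for each update: a single-task "merge" reproduces `B @ A` unchanged, which mirrors the paper's lossless-projection claim, while all averaging arithmetic happens on small `(n*r) × (n*r)` matrices rather than full `d_out × d_in` weights.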

📝 Abstract
In this paper, we address the challenges associated with merging low-rank adaptations of large neural networks. With the rise of parameter-efficient adaptation techniques, such as Low-Rank Adaptation (LoRA), model fine-tuning has become more accessible. While fine-tuning models with LoRA is highly efficient, existing merging methods often sacrifice this efficiency by merging fully-sized weight matrices. We propose the Core Space merging framework, which enables the merging of LoRA-adapted models within a common alignment basis, thereby preserving the efficiency of low-rank adaptation while substantially improving accuracy across tasks. We further provide a formal proof that projection into Core Space ensures no loss of information and provide a complexity analysis showing the efficiency gains. Extensive empirical results demonstrate that Core Space significantly improves existing merging techniques and achieves state-of-the-art results on both vision and language tasks while utilizing a fraction of the computational resources. Codebase is available at https://github.com/apanariello4/core-space-merging.
Problem

Research questions and friction points this paper is trying to address.

Merging low-rank adapted neural networks efficiently without sacrificing accuracy
Preserving the efficiency of low-rank adaptation while improving merging accuracy across multiple tasks
Enabling model merging in a common alignment basis to reduce computational resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Merging models in a shared Core Space alignment basis
Preserving low-rank efficiency while improving cross-task accuracy
Formally proving that projection into Core Space loses no information
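The efficiency claim can be made concrete with a back-of-the-envelope count of the entries each approach manipulates per layer. The dimensions below are illustrative assumptions chosen for a typical transformer layer, not figures from the paper's complexity analysis.

```python
# Entries manipulated when merging n rank-r LoRA adapters of one
# d_out x d_in layer (illustrative dimensions, not the paper's numbers).
d_out = d_in = 4096   # hidden size of a typical transformer layer
r, n = 16, 8          # LoRA rank and number of tasks being merged

full_merge = d_out * d_in               # dense delta-W handled per merge step
core_matrix = (n * r) ** 2              # merging happens in an (n*r) x (n*r) core
shared_bases = (d_out + d_in) * n * r   # one-off shared projection bases
core_total = core_matrix + shared_bases

print(f"full-size merge:  {full_merge:,} entries")
print(f"core-space merge: {core_total:,} entries")
```

Under these assumptions the core-space route touches roughly 1M entries per layer versus about 16.8M for a dense merge, which is consistent in spirit with the summary's claim of a large reduction in computational overhead.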