DC-Merge: Improving Model Merging with Directional Consistency

📅 2026-03-06
🤖 AI Summary
This work addresses the challenge of insufficient knowledge retention in multi-task model merging, which often arises from misaligned directions of task vectors in the singular subspace. To resolve this, the authors propose a novel mechanism that first smooths the singular values of task vectors to balance their energy distribution and then projects them onto a shared orthogonal subspace to align their directional geometry. The aligned vectors are aggregated within this subspace and subsequently mapped back to the original parameter space. This approach is the first to enforce directional consistency in the singular subspace while jointly optimizing both energy distribution and geometric structure, thereby significantly enhancing multi-task knowledge retention. Extensive experiments on vision and vision-language benchmarks demonstrate that the method achieves state-of-the-art performance, consistently outperforming existing model merging techniques.
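The summary's first step, smoothing singular values to balance each task vector's energy distribution, can be illustrated with a minimal sketch. The paper does not specify the smoothing function, so the power-law exponent `gamma` below is an assumed, illustrative choice; the total spectral energy is preserved after flattening.

```python
import numpy as np

def smooth_singular_values(task_vector, gamma=0.5):
    """Flatten a task vector's energy spectrum (illustrative sketch,
    not the paper's exact scheme): raise the singular values to a
    power gamma < 1 so weaker components gain relative weight, then
    rescale so the total energy ||s||_2 is unchanged."""
    U, s, Vt = np.linalg.svd(task_vector, full_matrices=False)
    s_smooth = s ** gamma
    # Rescale so the smoothed spectrum has the same total energy.
    s_smooth *= np.linalg.norm(s) / np.linalg.norm(s_smooth)
    return U @ np.diag(s_smooth) @ Vt
```

With `gamma < 1`, dominant singular values are compressed toward the weaker ones, so semantically important but low-energy components are less likely to be drowned out when merging.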

📝 Abstract
Model merging aims to integrate multiple task-adapted models into a unified model that preserves the knowledge of each task. In this paper, we identify that the key to this knowledge retention lies in maintaining the directional consistency of singular spaces between merged multi-task vector and individual task vectors. However, this consistency is frequently compromised by two issues: i) an imbalanced energy distribution within task vectors, where a small fraction of singular values dominate the total energy, leading to the neglect of semantically important but weaker components upon merging, and ii) the geometric inconsistency of task vectors in parameter space, which causes direct merging to distort their underlying directional geometry. To address these challenges, we propose DC-Merge, a method for directional-consistent model merging. It first balances the energy distribution of each task vector by smoothing its singular values, ensuring all knowledge components are adequately represented. These energy-balanced vectors are then projected onto a shared orthogonal subspace to align their directional geometries with minimal reconstruction error. Finally, the aligned vectors are aggregated in the shared orthogonal subspace and projected back to the original parameter space. Extensive experiments on vision and vision-language benchmarks show that DC-Merge consistently achieves state-of-the-art performance in both full fine-tuning and LoRA settings. The implementation code is available at https://github.com/Tobeginwith/DC-Merge.
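The abstract's three-stage pipeline can be sketched end to end. This is a hedged approximation of the described procedure, not the authors' implementation (see the linked repository for that): the smoothing exponent `gamma`, the choice of building the shared orthogonal basis from the left singular vectors of the column-concatenated smoothed vectors, and simple averaging as the aggregation rule are all assumptions made here for illustration.

```python
import numpy as np

def dc_merge_sketch(task_vectors, gamma=0.5, rank=None):
    """Sketch of a DC-Merge-style pipeline under assumed design
    choices: 1) smooth each task vector's singular values to balance
    energy, 2) form a shared orthogonal subspace from the smoothed
    vectors, 3) aggregate inside that subspace, 4) map the result
    back to the original parameter space."""
    # Step 1: energy balancing via singular-value smoothing.
    smoothed = []
    for W in task_vectors:
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        s_ = s ** gamma
        s_ *= np.linalg.norm(s) / np.linalg.norm(s_)  # keep total energy
        smoothed.append(U @ np.diag(s_) @ Vt)

    # Step 2: shared orthogonal basis from the column-concatenated
    # smoothed vectors (one plausible construction, not the paper's).
    stacked = np.concatenate(smoothed, axis=1)
    Q, _, _ = np.linalg.svd(stacked, full_matrices=False)
    if rank is not None:
        Q = Q[:, :rank]

    # Steps 3-4: project into the shared subspace, average, map back.
    projected = [Q.T @ W for W in smoothed]
    merged_sub = np.mean(projected, axis=0)
    return Q @ merged_sub
```

Because every task vector is expressed in the same orthonormal basis `Q` before averaging, the aggregation respects a common directional geometry instead of mixing misaligned singular spaces in raw parameter coordinates.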
Problem

Research questions and friction points this paper is trying to address.

model merging
directional consistency
energy distribution
geometric inconsistency
singular spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

directional consistency
model merging
singular value smoothing
orthogonal subspace projection
energy balancing
Han-Chen Zhang
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
Zi-Hao Zhou
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
Mao-Lin Luo
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
Shimin Di
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
Min-Ling Zhang
Professor, School of Computer Science and Engineering, Southeast University, China
Artificial Intelligence, Machine Learning, Data Mining
Tong Wei
Southeast University
Machine Learning