🤖 AI Summary
Multi-expert model fusion suffers from performance degradation due to rank collapse in the task vector space, a phenomenon newly identified in this work. Method: a subspace-enhanced fusion framework (Subspace Boosting) comprising (1) subspace regularization and projection enhancement that explicitly preserve the rank of the task vector space; (2) higher-order generalized singular value decomposition (HO-GSVD) to quantify task similarity and improve interpretability; and (3) efficient fusion via task arithmetic combined with SVD. Results: on vision benchmarks, fusing up to 20 expert models yields performance gains of more than 10%, significantly mitigating diminishing returns while improving robustness and cross-task generalization.
📝 Abstract
Model merging combines multiple specialized expert models into a single model capable of performing multiple tasks. However, merging an increasing number of specialized experts generally yields diminishing returns and reduced overall performance gains. In this work, we offer an explanation and analysis from a task arithmetic perspective, revealing that as the merging process continues across more and more experts (for numerous existing merging methods), the associated task vector space experiences rank collapse. To mitigate this issue, we introduce Subspace Boosting, which operates on the singular value decomposition of the task vector space and maintains task vector ranks. Subspace Boosting raises merging efficacy for up to 20 expert models by large margins of more than 10% when evaluated on vision benchmarks. Moreover, we propose employing Higher-Order Generalized Singular Value Decomposition to further quantify task similarity, offering a new interpretable perspective on model merging.
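To make the task arithmetic framing concrete, here is a minimal toy sketch of the core ideas: forming task vectors as weight deltas, measuring the effective rank of their merged sum via SVD, and lifting small singular values so the merged subspace does not collapse onto a few dominant directions. All function names, the singular-value floor, and the toy data are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_vector(expert, base):
    # Task arithmetic: a task vector is the expert's weights minus the base weights.
    return expert - base

def effective_rank(mat, tol=1e-3):
    # Numerical rank: count singular values above a relative tolerance.
    s = np.linalg.svd(mat, compute_uv=False)
    return int((s > tol * s[0]).sum())

def subspace_boost(merged_tv, floor=0.1, tol=1e-9):
    # Illustrative rank preservation (assumed mechanism, simplified):
    # lift small but numerically nonzero singular values of the merged
    # task vector toward a floor, leaving true null directions at zero.
    u, s, vt = np.linalg.svd(merged_tv, full_matrices=False)
    s = np.where(s > tol * s[0], np.maximum(s, floor * s[0]), 0.0)
    return u @ np.diag(s) @ vt

# Toy setup: 20 low-rank experts perturbing a shared base model.
d = 32
base = rng.standard_normal((d, d))
experts = [base + rng.standard_normal((d, 1)) @ rng.standard_normal((1, d))
           for _ in range(20)]
tvs = [task_vector(e, base) for e in experts]

merged_tv = sum(tvs)
print("effective rank before boosting:", effective_rank(merged_tv))
boosted_tv = subspace_boost(merged_tv)
print("effective rank after boosting: ", effective_rank(boosted_tv))

# A merged model would then be formed as base + alpha * boosted_tv
# for some scaling coefficient alpha, as in plain task arithmetic.
```

The sketch only illustrates the diagnosis (spectral concentration of the summed task vectors) and the remedy's general shape (rescaling the singular spectrum); the paper's actual boosting operates per layer on real fine-tuned checkpoints.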