🤖 AI Summary
This work addresses the scalability bottleneck in model merging—specifically, the performance degradation observed as the number of experts increases. We establish, for the first time, a theoretical framework grounded in Gaussian width and approximate kinematics, revealing parameter-space saturation as the fundamental limiting factor; we further prove that performance gains exhibit strictly concave decay and admit a unique optimal merging threshold. Building on this insight, we propose Reparameterized Heavy-Tailed (RHT) merging, which alleviates saturation constraints via heavy-tailed reparameterization of expert weights. Extensive evaluation across 12 knowledge-intensive and general-purpose benchmarks demonstrates that RHT significantly delays performance decay and raises the upper bound for multi-task fusion. The implementation is open-sourced. To our knowledge, this is the first theoretically grounded paradigm for scalable model merging, offering provable guarantees on convergence behavior and capacity limits.
📝 Abstract
Model merging dramatically reduces storage and computational costs by combining multiple expert models into a single multi-task model. Although recent model merging methods have shown promising results, they struggle to maintain performance gains as the number of merged models increases. In this paper, we investigate the key obstacles that limit the scalability of model merging when integrating a large number of expert models. First, we prove that there is an upper bound on the performance of model merging. Further theoretical analysis reveals that the limited effective parameter space imposes a strict constraint on the number of models that can be successfully merged. An analysis based on Gaussian width shows that the marginal benefit of merging additional models diminishes according to a strictly concave function, implying that the effective parameter space saturates rapidly as the number of merged models grows. Furthermore, using Approximate Kinematics Theory, we prove the existence of a unique optimal threshold beyond which adding more models does not yield significant performance improvements. Motivated by this analysis, we introduce a straightforward Reparameterized Heavy-Tailed (RHT) method that extends the coverage of the merged model, thereby enhancing its performance. Empirical results on 12 benchmarks, including both knowledge-intensive and general-purpose tasks, validate our theoretical analysis. We believe these results will spark further research beyond the current scope of model merging. The source code is available in the anonymous GitHub repository https://github.com/wzj1718/ModelMergingAnalysis.
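The abstract does not spell out the merging procedure or the RHT transform, so the following is only a minimal sketch of the general setup it builds on: standard task-arithmetic merging (averaging expert weight deltas onto a shared base), plus a hypothetical heavy-tailed reweighting of those deltas. The function names, the signed power transform, and the `power` parameter are illustrative assumptions, not the paper's actual RHT method.

```python
import numpy as np

def merge_task_vectors(base, experts, scale=1.0):
    """Baseline task-arithmetic merge: average the experts' deltas
    ("task vectors") and add the mean delta back to the base weights."""
    deltas = [expert - base for expert in experts]
    return base + scale * np.mean(deltas, axis=0)

def heavy_tailed_reparam(delta, power=0.5):
    """Hypothetical heavy-tailed reparameterization (NOT the paper's RHT):
    a signed power transform that fattens the tails of the delta
    distribution, then renormalizes so the transformed delta keeps the
    original Euclidean norm. It only illustrates the idea of reshaping
    expert deltas before merging."""
    rescaled = np.sign(delta) * np.abs(delta) ** power
    r_norm = np.linalg.norm(rescaled)
    if r_norm == 0:
        return rescaled
    return rescaled * (np.linalg.norm(delta) / r_norm)

# Sketch of an end-to-end merge with the hypothetical reweighting:
# transform each expert's delta, then average the transformed deltas.
def merge_with_reparam(base, experts, power=0.5, scale=1.0):
    deltas = [heavy_tailed_reparam(e - base, power) for e in experts]
    return base + scale * np.mean(deltas, axis=0)
```

With a single expert and `scale=1.0`, `merge_task_vectors` simply recovers that expert, which is a quick sanity check for the baseline.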