🤖 AI Summary
This work addresses the challenge of theoretically grounded, data-free model fusion that minimizes interference across tasks. Framing model merging as a layer-wise optimization problem, the study introduces a data-free covariance estimation method that derives per-layer covariance matrices directly from model difference matrices, eliminating the need for auxiliary data. The proposed approach is both theoretically principled and computationally efficient, and it proves consistently effective across vision and language benchmarks on models ranging from 86M to 7B parameters, significantly outperforming existing data-free merging techniques.
📝 Abstract
Model merging provides a way of cheaply combining individual models to produce a single model that inherits each constituent model's capabilities. While some merging methods can approach the performance of multitask training, they are often heuristically motivated and lack theoretical justification. A principled alternative is to pose model merging as a layer-wise optimization problem that directly minimizes interference between tasks. However, this formulation requires estimating per-layer covariance matrices from data, which may not be available at merging time. In contrast, many of the heuristically motivated methods do not require auxiliary data, making them practically advantageous. In this work, we revisit the interference-minimization framework and show that, under certain conditions, covariance matrices can be estimated directly from difference matrices, eliminating the need for data while also reducing computational costs. We validate our approach across vision and language benchmarks on models ranging from 86M to 7B parameters, outperforming previous data-free state-of-the-art merging methods.
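The abstract does not spell out the estimator, but the idea of replacing data-derived per-layer covariances with quantities computed from difference matrices can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy implementation, not the paper's actual algorithm: it merges one layer's weights in a RegMean-style weighted least-squares form, where each task's Gram matrix (normally computed from activations on held-out data) is replaced by a data-free surrogate built from that task's difference matrix `delta = w_task - w_base`. The function name `merge_layer_data_free` and the specific surrogate `delta.T @ delta` are assumptions for illustration.

```python
import numpy as np


def merge_layer_data_free(w_base, task_weights, eps=1e-8):
    """Merge one layer's weight matrices without auxiliary data.

    w_base:       (out, in) weights of the shared base model.
    task_weights: list of (out, in) fine-tuned weights, one per task.
    eps:          ridge term to keep the normal equations invertible.

    Sketch: each task's covariance is approximated by the Gram matrix
    of its difference matrix (a stand-in for activation statistics),
    and the merged weights solve the resulting least-squares problem.
    """
    # Difference (task) matrices relative to the base model.
    deltas = [w - w_base for w in task_weights]
    # Hypothetical data-free covariance surrogate per task.
    grams = [d.T @ d for d in deltas]
    # Closed-form weighted least-squares merge over the input dimension.
    num = sum(w @ g for w, g in zip(task_weights, grams))
    den = sum(grams) + eps * np.eye(w_base.shape[1])
    return num @ np.linalg.inv(den)
```

Applied independently to every layer, this keeps the merge data-free and cheap: each layer costs only a few matrix products and one small inverse, with no forward passes over auxiliary data.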