🤖 AI Summary
Merging multiple fine-tuned pre-trained models suffers from distribution mismatch because existing methods process each layer independently, neglecting inter-layer dependencies. Method: We propose Chain of Merges (CoM), a chain-structured merging framework that models this mismatch as layer-wise-propagating internal covariate shift. It updates inter-layer activation statistics in an auto-regressive manner, explicitly capturing cross-layer dependencies, without introducing extra parameters or requiring retraining, and produces the merged model through a sequence of conditionally optimal updates. Contribution/Results: CoM is the first method to systematically attribute distribution shift in model merging to statistical propagation mismatch across layers, and accordingly designs a scalable chain-structured merging mechanism. Extensive experiments on standard multi-task benchmarks demonstrate significant improvements over existing merging approaches, effectively mitigating performance degradation.
📝 Abstract
Fine-tuning pretrained models has become a standard pathway to achieve state-of-the-art performance across a wide range of domains, leading to a proliferation of task-specific model variants. As the number of such specialized modules increases, merging them into a unified model without retraining has become a critical challenge. Existing merging techniques often rely on interference heuristics, importance weighting, or activation matching while treating each layer independently, thereby failing to account for the inter-layer dependencies inherent in deep networks. This simplification leads to distributional mismatches, especially in activation-based methods, when changes in early layers are not properly reflected in downstream ones. We identify these mismatches as a form of internal covariate shift, comparable to the phenomenon encountered in the initial phases of neural network training. To address it, we propose Chain of Merges (CoM), a layer-wise merging procedure that updates activation statistics in an auto-regressive fashion, explicitly accounting for cross-layer interactions. CoM produces a coherent merged model through a series of conditionally optimal updates, effectively mitigating degradation caused by covariate shift. Experiments on standard benchmarks demonstrate that CoM achieves state-of-the-art performance.
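To make the auto-regressive idea concrete, here is a minimal sketch of a chain-of-merges loop over same-architecture MLPs. The abstract does not specify the per-layer merging criterion, so the activation-norm weighting below is purely illustrative (an assumption of this sketch); the key point it shows is that each layer is merged using activation statistics recomputed *after* the previous layers were merged, rather than statistics from the original unmerged models.

```python
import numpy as np

def chain_of_merges(models, task_inputs):
    """Merge several same-architecture MLPs layer by layer.

    models:      list of models, each a list of (d x d) weight matrices.
    task_inputs: one input batch per task, used to track activation stats.
    """
    merged = []
    # current activations of each task's batch through the merged prefix
    acts = [x.copy() for x in task_inputs]
    n_layers = len(models[0])
    for l in range(n_layers):
        # Illustrative merging criterion (an assumption, not the paper's):
        # weight each model's layer by its task's current activation norm.
        norms = np.array([np.linalg.norm(a) for a in acts])
        coeffs = norms / norms.sum()
        W_l = sum(c * m[l] for c, m in zip(coeffs, models))
        merged.append(W_l)
        # Auto-regressive step: refresh statistics by propagating every
        # task's batch through the just-merged layer, so the next layer's
        # merge is conditioned on post-merge activation distributions
        # (avoiding the internal-covariate-shift mismatch).
        acts = [np.maximum(a @ W_l, 0.0) for a in acts]  # ReLU MLP assumed
    return merged
```

Note that a naive independent-layer baseline would compute `acts` once from the original models and never update it; the re-propagation inside the loop is what makes each merge conditionally optimal given the layers already merged.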