🤖 AI Summary
Deep neural network model fusion is hindered by linear mode connectivity (LMC) barriers arising from weight-space solution dispersion, primarily due to inconsistent neuron permutations across distinct training configurations. To address this, we propose Training-time Neuron Alignment (TNA), a mechanism that enhances fusion performance without increasing inference overhead. TNA introduces permutation subspaces into the training phase—enabling lossless neuron alignment for the first time—and instantiates the TNA-PFN algorithm, which we theoretically prove reduces LMC barriers and supports federated fusion under heterogeneous data. Experiments demonstrate substantial improvements: enhanced generalization of Vision Transformers (ViTs) in Model Soup, superior performance of large language models (LLMs) in ColD fusion, and high-accuracy, low-communication wide-model fusion in federated learning settings.
📝 Abstract
In deep learning, stochastic gradient descent often yields functionally similar yet widely scattered solutions in weight space, even under the same initialization, causing barriers in the Linear Mode Connectivity (LMC) landscape. Overcoming these barriers is crucial for understanding deep learning dynamics and for improving model-fusion algorithms. Previous studies highlight the role of permutation symmetry in reducing post-training barriers via network permutation. However, these post-hoc methods, which demand extra computation, are less effective for large, complex models (e.g., ViTs, LLMs) because of the enormous number of possible permutation matrices. In this paper, we therefore study training-time neuron alignment. We hypothesize that a training-time permutation subspace can reduce LMC barriers for free, and we find that pruning at initialization supports this hypothesis. Beyond pruning, we introduce TNA-PFN, a simple yet lossless algorithm that applies a partial gradient mask during training. TNA-PFN is theoretically and empirically validated to reduce LMC barriers. It excels in wide model fusion applications, especially in federated learning, for which we propose two algorithms based on TNA-PFN that demonstrate its promise even under heterogeneous datasets. Moreover, TNA-PFN can enhance the generalization of model soup for vision transformers and of ColD fusion for pretrained language models.
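The core mechanism described above, a partial gradient mask fixed at initialization so that a random subset of weights never moves during training, can be sketched in a few lines. The following is a minimal illustrative toy (a two-layer numpy network with hand-written backprop), not the paper's implementation; the masking ratio `p_fix` and all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: one hidden ReLU layer trained on a small regression task.
W1 = rng.normal(scale=0.5, size=(8, 4))
W2 = rng.normal(scale=0.5, size=(1, 8))

# Partial-gradient-mask idea (per the abstract): freeze a random subset
# of weights at initialization by zeroing their gradients, so training
# is constrained to a subspace anchored by the fixed entries.
# mask == 1 -> trainable, mask == 0 -> frozen at its initial value.
p_fix = 0.3  # fraction of weights frozen (illustrative choice)
mask1 = (rng.random(W1.shape) >= p_fix).astype(float)
mask2 = (rng.random(W2.shape) >= p_fix).astype(float)

W1_init, W2_init = W1.copy(), W2.copy()

X = rng.normal(size=(64, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

lr = 0.05
for _ in range(200):
    h = np.maximum(X @ W1.T, 0.0)      # hidden activations, (64, 8)
    pred = h @ W2.T                    # predictions, (64, 1)
    err = pred - y
    g2 = err.T @ h / len(X)            # gradient w.r.t. W2
    gh = (err @ W2) * (h > 0)          # backprop through ReLU
    g1 = gh.T @ X / len(X)             # gradient w.r.t. W1
    # The mask is applied to the gradient, not the weights:
    # frozen entries receive no update and stay at initialization.
    W2 -= lr * (g2 * mask2)
    W1 -= lr * (g1 * mask1)

# After training, W1[mask1 == 0] still equals W1_init[mask1 == 0].
```

Because every run started from the same seed would share the same frozen anchor entries, independently trained copies are biased toward a common region of weight space, which is the intuition behind reduced LMC barriers at fusion time.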