🤖 AI Summary
This work addresses the data-free neural network averaging problem: synthesizing a single high-performance model solely from the final weights of multiple pre-trained models, each independently trained on a disjoint data subset. To this end, we propose the Amortized Model Ensembling (AME) framework, which models inter-model weight discrepancies as pseudo-gradients to guide adaptive weight fusion, and employs a data-agnostic meta-optimization strategy to perform ensemble learning without access to the original training data. Compared to the conventional model soup, AME significantly improves out-of-distribution generalization and consistently outperforms individual expert models and baseline methods across multiple benchmarks, demonstrating its effectiveness, robustness, and scalability. The core innovation is the first explicit interpretation of weight differences as optimizable pseudo-gradient signals, enabling efficient, adaptive, and truly data-free model averaging.
📝 Abstract
What does it even mean to average neural networks? We investigate the problem of synthesizing a single neural network from a collection of pretrained models, each trained on disjoint data shards, using only their final weights and no access to training data. In forming a definition of neural averaging, we draw insight from model soup, which appears to aggregate multiple models into a single model while enhancing generalization performance. In this work, we reinterpret model souping as a special case of a broader framework: Amortized Model Ensembling (AME) for neural averaging, a data-free meta-optimization approach that treats model differences as pseudo-gradients to guide neural weight updates. We show that this perspective not only recovers model soup but also enables more expressive and adaptive ensembling strategies. Empirically, AME produces averaged neural solutions that outperform both individual experts and model soup baselines, especially in out-of-distribution settings. Our results suggest a principled and generalizable notion of data-free model weight aggregation and define, in one sense, how to perform neural averaging.
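To make the pseudo-gradient view concrete, here is a minimal illustrative sketch (not the authors' implementation; function and parameter names are hypothetical). It treats the displacement of the current iterate from each expert's weights as a pseudo-gradient and applies SGD-style updates; a single step with learning rate 1 recovers the uniform model soup, while other step sizes, step counts, or adaptive optimizers yield the more expressive ensembling strategies the abstract alludes to.

```python
import numpy as np

def amortized_ensemble(expert_weights, base_idx=0, lr=0.5, steps=10):
    """Data-free averaging sketch in the AME spirit.

    expert_weights: list of flat weight arrays from independently
    trained experts. Starting from one expert, each step treats the
    mean displacement from the experts as a pseudo-gradient and takes
    an SGD-like step against it. With lr=1.0 and steps=1 this reduces
    exactly to the uniform model soup (the mean of the experts).
    """
    theta = expert_weights[base_idx].copy()
    for _ in range(steps):
        # Pseudo-gradient: average displacement of the current iterate
        # from each expert's final weights (no training data needed).
        pseudo_grad = np.mean([theta - w for w in expert_weights], axis=0)
        theta -= lr * pseudo_grad
    return theta
```

Replacing the plain SGD step with momentum or Adam on the same pseudo-gradient signal is one way to obtain adaptive, data-free fusion beyond uniform averaging.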