Mimetic Initialization of MLPs

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work extends mimetic initialization, previously applied only to spatial mixing layers, to channel-mixing MLP layers for the first time. By analyzing the weight structure of pretrained models, the authors arrive at an extremely simple initialization strategy: give the first MLP layer a nonzero mean. This accelerates training convergence on vision benchmarks such as CIFAR-10 and ImageNet-1k, and, although its effect is smaller than that of spatial mixing initializations, it can be combined with them for an additional gain.

📝 Abstract
Mimetic initialization uses pretrained models as case studies of good initialization, using observations of structures in trained weights to inspire new, simple initialization techniques. So far, it has been applied only to spatial mixing layers, such as convolutional, self-attention, and state space layers. In this work, we present the first attempt to apply the method to channel mixing layers, namely multilayer perceptrons (MLPs). Our extremely simple technique for MLPs -- to give the first layer a nonzero mean -- speeds up training on small-scale vision tasks like CIFAR-10 and ImageNet-1k. Though its effect is much smaller than spatial mixing initializations, it can be used in conjunction with them for an additional positive effect.
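The core technique ("give the first layer a nonzero mean") can be sketched as follows. This is a minimal illustration, not the paper's exact recipe: the function name and the `mean_scale` knob are ours, and the paper's precise mean and scale values may differ.

```python
import numpy as np

def mimetic_first_layer_init(fan_in, fan_out, mean_scale=0.1, rng=None):
    """Draw the first MLP layer's weights from a Gaussian whose mean is
    shifted away from zero, while keeping the usual 1/sqrt(fan_in) scale.
    `mean_scale` (hypothetical knob) sets the mean relative to that scale."""
    rng = np.random.default_rng() if rng is None else rng
    std = fan_in ** -0.5        # standard 1/sqrt(fan_in) initialization scale
    mean = mean_scale * std     # nonzero mean, small relative to the std
    return rng.normal(loc=mean, scale=std, size=(fan_out, fan_in))

# Example: first layer of an MLP block expanding 256 -> 1024 channels.
W1 = mimetic_first_layer_init(fan_in=256, fan_out=1024, mean_scale=0.1)
```

Subsequent MLP layers would keep a standard zero-mean initialization; only the first layer's mean is shifted.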
Problem

Research questions and friction points this paper is trying to address.

mimetic initialization
multilayer perceptrons
channel mixing layers
weight initialization
training efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mimetic Initialization
MLP
channel mixing
weight initialization
training acceleration
Asher Trockman
Research Scientist, Google
Deep Learning
J. Zico Kolter
Carnegie Mellon University