🤖 AI Summary
This work addresses the slow convergence of multilayer perceptrons (MLPs) during training by extending mimetic initialization—previously applied only to spatial mixing layers such as convolution, self-attention, and state space layers—to channel-mixing MLP layers for the first time. Drawing on the weight structure of pretrained models, the authors propose an extremely simple yet effective initialization strategy: give the first MLP layer a nonzero mean. This speeds up training convergence on small-scale vision tasks such as CIFAR-10 and ImageNet-1k, and although its effect is smaller than that of spatial mixing initializations, it can be combined with them for an additional gain, suggesting a simple, general-purpose initialization recipe for MLP layers.
📝 Abstract
Mimetic initialization uses pretrained models as case studies of good initialization, using observations of structures in trained weights to inspire new, simple initialization techniques. So far, it has been applied only to spatial mixing layers, such as convolutional, self-attention, and state space layers. In this work, we present the first attempt to apply the method to channel mixing layers, namely multilayer perceptrons (MLPs). Our extremely simple technique for MLPs -- to give the first layer a nonzero mean -- speeds up training on small-scale vision tasks like CIFAR-10 and ImageNet-1k. Though its effect is much smaller than that of spatial mixing initializations, it can be used in conjunction with them for an additional positive effect.
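To make the technique concrete, here is a minimal sketch of what "give the first MLP layer a nonzero mean" could look like in code. This is an illustration under assumptions, not the paper's reference implementation: the mean value `0.1`, the He-style scale for the random part, and the function name `init_first_mlp_layer` are all hypothetical choices for the example.

```python
import numpy as np

def init_first_mlp_layer(fan_in, fan_out, mean=0.1, rng=None):
    """Sketch of a nonzero-mean initialization for the first MLP layer.

    Standard initializations (e.g. Kaiming/He) draw weights with zero mean.
    The idea described in the abstract is to shift the first channel-mixing
    (MLP) layer's weights by a constant mean instead. The value 0.1 is a
    hypothetical placeholder, not a figure taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / fan_in)  # He-style scale for the zero-mean random part
    return rng.normal(loc=mean, scale=std, size=(fan_out, fan_in))

# Example: first layer of a transformer-style MLP block (hidden dim = 4 * d)
W1 = init_first_mlp_layer(fan_in=256, fan_out=1024)
# The empirical mean of W1 sits near the chosen nonzero value rather than 0.
```

In a PyTorch model one would copy such a matrix into the first `nn.Linear` of each MLP block before training, leaving the second (output) layer at its default zero-mean initialization.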