Structured Initialization for Vision Transformers

📅 2025-05-26
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Vision Transformers (ViTs) lack the convolutional inductive biases of CNNs and consequently generalize poorly in small-data regimes. Method: the paper proposes a structured weight initialization strategy that injects convolutional priors into ViTs without altering their architecture. Contribution/Results: the authors present this as the first work to introduce such inductive bias solely through initialization rather than architectural modification. Inspired by random impulse filters, the method constructs initial weights that (i) emulate convolutional kernels, (ii) impose structural constraints on the attention weights, and (iii) remain compatible with diverse architectures, including ViT, Swin Transformer, and MLP-Mixer. Experiments demonstrate substantial improvements over standard initialization on small- and medium-scale benchmarks (e.g., Food-101 and CIFAR), while maintaining competitive accuracy on ImageNet-1K. The approach also enhances model transferability and training stability.
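The summary above describes attention weights structured to emulate impulse-style convolutional kernels. As a hedged illustration of that idea (function and parameter names such as `impulse_attention_maps` and `max_offset` are our own assumptions, not the paper's implementation), the minimal sketch below builds per-head impulse attention targets over a patch grid: each head attends to a single randomly chosen spatial offset, the attention analogue of a random impulse filter.

```python
# Minimal sketch (assumption, not the paper's code): each attention head is
# initialized to behave like a random "impulse" convolution, i.e. every
# query token attends to one fixed spatial offset on the patch grid.
import torch

def impulse_attention_maps(grid_h: int, grid_w: int, num_heads: int,
                           max_offset: int = 1) -> torch.Tensor:
    """Return (num_heads, N, N) target attention maps, N = grid_h * grid_w.

    Each head draws one random offset (dy, dx); out-of-grid targets fall
    back to the query token itself (identity attention at the border).
    """
    n = grid_h * grid_w
    maps = torch.zeros(num_heads, n, n)
    for h in range(num_heads):
        # one random impulse offset per head, like a random shift kernel
        dy = torch.randint(-max_offset, max_offset + 1, (1,)).item()
        dx = torch.randint(-max_offset, max_offset + 1, (1,)).item()
        for i in range(n):
            y, x = divmod(i, grid_w)
            ty, tx = y + dy, x + dx
            j = ty * grid_w + tx if (0 <= ty < grid_h and 0 <= tx < grid_w) else i
            maps[h, i, j] = 1.0
    return maps

# Example: 14x14 patch grid (224-pixel image, 16-pixel patches), 6 heads.
attn_init = impulse_attention_maps(14, 14, num_heads=6)
print(attn_init.shape)  # torch.Size([6, 196, 196])
```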

📝 Abstract
Convolutional Neural Networks (CNNs) inherently encode strong inductive biases, enabling effective generalization on small-scale datasets. In this paper, we propose integrating this inductive bias into ViTs, not through an architectural intervention but solely through initialization. The motivation is to have a ViT that enjoys strong CNN-like performance when data assets are small, yet still scales to ViT-like performance as the data expands. Our approach is motivated by our empirical finding that random impulse filters can achieve performance commensurate with that of learned filters within a CNN. We improve upon current ViT initialization strategies, which typically rely on empirical heuristics such as using attention weights from pretrained models or focusing on the distribution of attention weights without enforcing structure. Empirical results demonstrate that our method significantly outperforms standard ViT initialization across numerous small- and medium-scale benchmarks, including Food-101, CIFAR-10, CIFAR-100, STL-10, Flowers, and Pets, while maintaining comparable performance on large-scale datasets such as ImageNet-1K. Moreover, our initialization strategy can be easily integrated into various transformer-based architectures, such as Swin Transformer and MLP-Mixer, with consistent improvements in performance.
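As a hedged follow-on to the abstract: one plausible way to impose such an impulse pattern at initialization without touching the architecture is an additive attention-logit bias, so that softmax(QK^T/sqrt(d) + B) starts out concentrated on each head's impulse offset. The helper below is an illustrative assumption (the names `impulse_logit_bias` and `strength` are hypothetical), not the authors' exact construction.

```python
# Hypothetical companion to the sketch above: turn (heads, N, N) impulse
# maps into additive attention-logit biases. With Q and K near zero at
# initialization, softmax(B) reproduces each head's impulse pattern;
# training can then shift content-based attention away from the prior.
import torch

def impulse_logit_bias(maps: torch.Tensor, strength: float = 4.0,
                       eps: float = 1e-6) -> torch.Tensor:
    # log maps the 0/1 entries to (very negative, ~0) logits; `strength`
    # controls how sharply the initial attention follows the impulse.
    return strength * torch.log(maps + eps)

# Usage with maps from the previous sketch:
# bias = impulse_logit_bias(impulse_attention_maps(14, 14, num_heads=6))
# attn = torch.softmax(q @ k.transpose(-2, -1) / d**0.5 + bias, dim=-1)
```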
Problem

Research questions and friction points this paper is trying to address.

Improving ViT performance on small datasets via structured initialization
Integrating CNN-like inductive bias into ViTs without architectural changes
Enhancing initialization strategies for transformers across diverse benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates CNN inductive bias via initialization
Uses structured initialization over random heuristics
Compatible with various transformer architectures