AI Summary
Vision Transformers (ViTs) lack the inductive bias of convolutional layers and are therefore fragile under both integer and fractional image translations; conventional CNNs, in turn, lose strict translation invariance to aliasing introduced by downsampling and nonlinear activations. This work proposes a linear cross-covariance attention mechanism that enables ViTs to achieve equivariance to fractional (continuous) translations. Combined with anti-aliased downsampling and aliasing-resistant nonlinear activations, the approach removes the major sources of aliasing in the architecture. The resulting model maintains competitive accuracy on ImageNet and other image classification benchmarks while significantly outperforming comparably sized ViT and CNN baselines under adversarial translation perturbations. These results empirically support the robustness benefits of continuously translation-equivariant representations.
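The key structural property behind the attention mechanism can be sketched numerically. The following is a minimal numpy illustration of a simplified linear cross-covariance attention (a d×d channel-attention map with no softmax or normalization) — an assumption for illustration, not the paper's exact layer. Because the channel map sums over tokens, it is unchanged when the tokens are translated, so the layer is shift-equivariant; the demo uses an integer cyclic shift, whereas the paper's claim extends to fractional translations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 16, 8                         # tokens, channels

# Hypothetical projection weights; only the structure of the attention
# map matters for the equivariance argument, not the parameterization.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def linear_xca(X):
    # Cross-covariance attention acts over channels: the d x d map
    # K^T Q sums over all tokens, so it is invariant to token shifts.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = K.T @ Q / N                  # (d, d) channel-mixing map
    return V @ A                     # (N, d): each token mixed channel-wise

X = rng.standard_normal((N, d))
Y = linear_xca(X)
Y_shift = linear_xca(np.roll(X, 3, axis=0))  # translate tokens by 3

# Equivariance: shifting tokens then attending == attending then shifting.
print(np.allclose(Y_shift, np.roll(Y, 3, axis=0)))  # True
```

Contrast this with standard softmax self-attention over tokens, whose N×N attention map does depend on absolute token positions once positional embeddings are added.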
Abstract
Transformers have emerged as a competitive alternative to convnets in vision tasks, yet they lack the architectural inductive bias of convnets, which may hinder their potential performance. Specifically, Vision Transformers (ViTs) are not translation-invariant and are more sensitive to minor image translations than standard convnets. Previous studies have shown, however, that convnets are also not perfectly shift-invariant, due to aliasing in downsampling and nonlinear layers. Consequently, anti-aliasing approaches have been proposed to certify convnets' translation robustness. Building on this line of work, we propose an Alias-Free ViT, which combines two main components. First, it uses alias-free downsampling and nonlinearities. Second, it uses linear cross-covariance attention that is shift-equivariant to both integer and fractional translations, enabling a shift-invariant global representation. Our model maintains competitive performance in image classification and outperforms similar-sized models in terms of robustness to adversarial translations.
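The effect of alias-free downsampling described in the abstract can be sketched in one dimension. Below is a hedged numpy illustration: the binomial [1, 2, 1]/4 low-pass kernel is an assumed, BlurPool-style stand-in, not necessarily the paper's filter. Blurring before the stride-2 subsample attenuates the high frequencies that would otherwise alias, so the downsampled output changes far less under a one-pixel input translation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)          # 1-D stand-in for a feature row

def subsample(sig):
    # Naive stride-2 downsampling: high frequencies alias.
    return sig[::2]

def blur_subsample(sig):
    # Anti-aliased downsampling: low-pass filter, then stride 2.
    # The binomial kernel is illustrative (BlurPool-style), an
    # assumption rather than the paper's exact filter.
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    blurred = np.convolve(np.pad(sig, 1, mode="wrap"), kernel, mode="valid")
    return blurred[::2]

x_shift = np.roll(x, 1)              # one-pixel input translation

naive_gap = np.linalg.norm(subsample(x_shift) - subsample(x))
aa_gap = np.linalg.norm(blur_subsample(x_shift) - blur_subsample(x))
print(naive_gap > aa_gap)            # anti-aliased output moves far less
```

The same principle applies to nonlinearities: pointwise activations create new high-frequency content, which is why the model also uses aliasing-resistant activations rather than low-pass filtering alone.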