🤖 AI Summary
This work addresses a critical limitation of vision transformers (ViTs), such as DINOv2, whose fixed positional encodings introduce position biases unrelated to semantic content, hindering zero-shot generalization in fields like materials science. The study systematically shows, for the first time, that such positional bias is prevalent across diverse architectures and positional encoding schemes. To mitigate it, the authors propose fine-tuning ViTs with ALiBi (Attention with Linear Biases), a relative positional encoding method. Through linear probing analysis, they demonstrate that this approach substantially reduces positional bias while preserving the model's general-purpose semantic representations. The adapted model consequently handles images with no preferred direction, such as complex microscopy images, more robustly, and its features support trainable segmentation tasks.
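The ALiBi mechanism mentioned above replaces learned positional embeddings with a distance-dependent penalty added directly to the attention logits. A minimal sketch of a 2D variant for image patches follows; the Euclidean patch distance and the geometric slope schedule are assumptions (the slope schedule follows the original ALiBi paper; the exact 2D distance used in this work is not specified here):

```python
import numpy as np

def alibi_bias_2d(grid, n_heads):
    """Per-head linear attention bias for a grid x grid patch layout.

    ALiBi adds -slope * distance to the attention logits before the
    softmax, so attention decays with distance instead of relying on
    absolute position embeddings. Here distance is the Euclidean
    distance between patch coordinates (an assumed 2D extension).
    """
    # (row, col) coordinate of every patch token.
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), axis=-1).reshape(-1, 2)
    # Pairwise Euclidean distances between patches.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Geometric head slopes, as in the original ALiBi formulation.
    slopes = 2.0 ** (-8.0 * (np.arange(1, n_heads + 1) / n_heads))
    return -slopes[:, None, None] * dist  # shape: (heads, tokens, tokens)

bias = alibi_bias_2d(grid=4, n_heads=8)
# In attention: softmax(q @ k.T / sqrt(d) + bias[h]) for each head h.
```

Because the bias depends only on relative distance, it is identical for every image and introduces no content-independent preference for particular absolute positions.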
📝 Abstract
Vision transformers (ViTs) - especially feature foundation models like DINOv2 - learn rich representations useful for many downstream tasks. However, architectural choices (such as positional encoding) can lead these models to display positional biases and artefacts independent of semantic content. This makes zero-shot adaptation difficult in fields like materials science, where images are often cross-sections of homogeneous microstructure (i.e. having no preferred direction). In this work, we investigate positional bias in ViTs via linear probing, finding it present across a range of objectives and positional encodings, and subsequently reduce it by fine-tuning models to use ALiBi relative positional encoding. We demonstrate that these models retain desirable general semantics and that their unbiased features can be used successfully in trainable segmentation of complex microscopy images.
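The linear probing protocol used to measure positional bias can be sketched as follows: fit a linear map from frozen patch features to patch coordinates, and read off how well position is linearly decodable. This is an illustrative reconstruction with synthetic features, not the paper's exact probe setup; the function name and the R^2 readout are assumptions:

```python
import numpy as np

def position_probe_r2(feats, coords):
    """Linear probe: how well do frozen patch features predict position?

    feats: (n_patches, d) frozen features; coords: (n_patches, 2) row/col.
    Fits ordinary least squares and returns R^2 per coordinate axis.
    A score near 1 means position is linearly decodable from the
    features, i.e. the representation carries positional bias.
    """
    X = np.hstack([feats, np.ones((len(feats), 1))])  # append bias term
    W, *_ = np.linalg.lstsq(X, coords, rcond=None)    # OLS fit
    pred = X @ W
    ss_res = ((coords - pred) ** 2).sum(axis=0)
    ss_tot = ((coords - coords.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot

# Toy demo on a 14x14 patch grid: features that leak position
# are almost perfectly decodable, so R^2 is close to 1 on both axes.
rng = np.random.default_rng(0)
coords = np.stack(np.meshgrid(np.arange(14), np.arange(14),
                              indexing="ij"), axis=-1).reshape(-1, 2).astype(float)
feats = np.hstack([coords, rng.normal(size=(196, 30))])  # position leaks in
r2 = position_probe_r2(feats, coords)
```

In practice the probe would be run on patch features extracted from a frozen ViT; a high score before fine-tuning and a low score after ALiBi fine-tuning is the signature the paper reports.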