What DINO saw: ALiBi positional encoding reduces positional bias in Vision Transformers

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of vision transformers (ViTs) such as DINOv2: their fixed positional encodings introduce position biases unrelated to semantic content, which hinders generalization in zero-shot settings like materials science. The study systematically shows, for the first time, that such positional bias is prevalent across diverse architectures and positional encoding schemes. To mitigate it, the authors fine-tune ViTs with ALiBi (Attention with Linear Biases), a relative positional encoding method. Through linear probing analysis, they demonstrate that this approach substantially reduces positional bias while preserving the model's general-purpose semantic representations. The adapted model transfers better to images without a preferred direction, such as complex microscopy images, and its features perform well in trainable segmentation tasks.
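For intuition, here is a minimal sketch of the kind of ALiBi-style attention bias the summary describes, adapted to a 2D patch grid. The slope schedule follows the original ALiBi paper; the Euclidean patch distance is an illustrative assumption, not necessarily the authors' exact formulation (DINOv2's CLS and register tokens, for instance, would additionally need zero-bias rows and columns).

```python
import math
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric slope schedule from the original ALiBi paper
    # (exact for power-of-two head counts): m_h = 2^(-8h / H).
    return torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])

def alibi_2d_bias(grid_h: int, grid_w: int, num_heads: int) -> torch.Tensor:
    # (row, col) coordinates of every patch token on the grid.
    ys, xs = torch.meshgrid(torch.arange(grid_h), torch.arange(grid_w), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
    # Pairwise Euclidean distance between patches (assumption: the paper
    # may use a different 2D distance, e.g. Manhattan).
    dist = torch.cdist(coords, coords)  # (N, N)
    slopes = alibi_slopes(num_heads)    # (H,)
    # Per-head penalty that grows linearly with patch distance: (H, N, N).
    return -slopes.view(-1, 1, 1) * dist

def attention_with_alibi(q, k, v, bias):
    # q, k, v: (B, H, N, d); bias: (H, N, N), broadcast over the batch dim.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores + bias, dim=-1) @ v
```

For a 224 px input with 16 px patches and 12 heads, `alibi_2d_bias(14, 14, 12)` yields a (12, 196, 196) bias that would be added to every attention layer's logits.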

📝 Abstract
Vision transformers (ViTs) - especially feature foundation models like DINOv2 - learn rich representations useful for many downstream tasks. However, architectural choices (such as positional encoding) can lead to these models displaying positional biases and artefacts independent of semantic content. This makes zero-shot adaptation difficult in fields like materials science, where images are often cross-sections of homogeneous microstructure (i.e. having no preferred direction). In this work, we investigate the positional bias in ViTs via linear probing, finding it present across a range of objectives and positional encodings, and subsequently reduce it by fine-tuning models to use ALiBi relative positional encoding. We demonstrate that these models retain desirable general semantics and their unbiased features can be used successfully in trainable segmentation of complex microscopy images.
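The probing setup below is a hedged sketch of how such a linear probe might look: freeze the backbone, regress each patch's normalised grid coordinate from its feature vector, and read low regression error as evidence that the features encode position regardless of content. The MSE objective and coordinate target are illustrative assumptions; the paper's probe design may differ.

```python
import torch
import torch.nn as nn

def probe_position(feats: torch.Tensor, grid_h: int, grid_w: int,
                   epochs: int = 100, lr: float = 1e-2) -> float:
    # feats: (B, N, D) frozen patch features from a ViT backbone,
    # with N = grid_h * grid_w patch tokens (CLS token excluded).
    B, N, D = feats.shape
    assert N == grid_h * grid_w
    # Target: each patch's (row, col) position, normalised to [0, 1].
    ys, xs = torch.meshgrid(torch.arange(grid_h), torch.arange(grid_w), indexing="ij")
    target = torch.stack([ys.flatten() / (grid_h - 1),
                          xs.flatten() / (grid_w - 1)], dim=-1)  # (N, 2)
    target = target.expand(B, N, 2).reshape(B * N, 2)
    x = feats.detach().reshape(B * N, D)
    # Linear probe: position should NOT be linearly decodable from
    # content-only features, so low final loss signals positional bias.
    probe = nn.Linear(D, 2)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(probe(x), target)
        loss.backward()
        opt.step()
    return loss.item()
```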
Problem

Research questions and friction points this paper is trying to address.

Vision Transformers
positional bias
positional encoding
zero-shot adaptation
microscopy images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Transformers
positional bias
ALiBi
DINOv2
relative positional encoding
👥 Authors

Moritz Pawlowsky
Center for Materials Research, Justus Liebig University, 35392 Giessen

Antonis Vamvakeros
Dyson School of Design Engineering, Imperial College London, London; Finden Ltd, Building R71, Rutherford Appleton Laboratory, Harwell Science and Innovation Campus

Alexander Weiss
Brown University (Computer Vision)

Anja Bielefeld
Center for Materials Research, Justus Liebig University, 35392 Giessen

Samuel J. Cooper
Dyson School of Design Engineering, Imperial College London, London

Ronan Docherty
Dyson School of Design Engineering, Imperial College London, London; Department of Materials, Imperial College London, London