🤖 AI Summary
This work addresses the lack of equivariance to geometric transformations—such as rotation and mirroring—in standard Vision Transformers (ViTs) applied to histopathology image analysis, a gap that leads to unstable representations under varying image orientations. To overcome this limitation, the authors propose a novel ViT architecture that incorporates rotation-equivariant convolutional kernels directly into the patch embedding stage, endowing the model with explicit rotational equivariance so that geometric transformations of the input are handled intrinsically rather than learned through augmentation. Evaluated on a public colorectal cancer dataset, the proposed method demonstrates improved classification stability and data efficiency, with enhanced robustness and generalization particularly when tested on images presented at multiple orientations.
📝 Abstract
Vision Transformers (ViTs) have gained rapid adoption in computational pathology for their ability to model long-range dependencies through self-attention, addressing the limitations of convolutional neural networks, which excel at capturing local patterns but struggle with global contextual reasoning. Recent pathology-specific foundation models have further advanced performance by leveraging large-scale pretraining. However, standard ViTs remain inherently non-equivariant to transformations such as rotations and reflections, which are ubiquitous variations in histopathology imaging. To address this limitation, we propose Equi-ViT, which integrates an equivariant convolution kernel into the patch embedding stage of a ViT architecture, imparting built-in rotational equivariance to learned representations. Equi-ViT achieves superior rotation-consistent patch embeddings and stable classification performance across image orientations. Our results on a public colorectal cancer dataset demonstrate that incorporating equivariant patch embedding enhances data efficiency and robustness, suggesting that equivariant transformers could serve as more generalizable backbones for ViT applications in histopathology, such as digital pathology foundation models.
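The core idea—making the patch embedding equivariant to rotations—can be illustrated with a minimal sketch. The paper's actual architecture is not reproduced here; the NumPy toy below (single channel, one output feature, C4 group of 90° rotations, function name `c4_patch_embed` is hypothetical) shows the general group-convolution recipe: correlate each patch with the kernel at all four rotations and pool over the orientation axis, so rotating the input image simply rotates the grid of patch embeddings rather than scrambling them.

```python
import numpy as np

def c4_patch_embed(img, kernel, patch=4):
    """Toy C4-equivariant patch embedding (hypothetical helper, not the paper's code).

    Each non-overlapping patch is correlated with the kernel at all four
    90-degree rotations; max-pooling over the rotation axis makes each
    patch's feature invariant to rotating that patch, so rotating the
    whole image only permutes (rotates) the grid of embeddings.
    """
    H, W = img.shape
    gh, gw = H // patch, W // patch
    rots = [np.rot90(kernel, g) for g in range(4)]  # C4 orbit of the kernel
    out = np.zeros((gh, gw))
    for i in range(gh):
        for j in range(gw):
            p = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = max(float((p * r).sum()) for r in rots)
    return out

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
k = rng.normal(size=(4, 4))

e = c4_patch_embed(img, k)
e_rot = c4_patch_embed(np.rot90(img), k)
# Equivariance check: rotating the input rotates the embedding grid.
assert np.allclose(e_rot, np.rot90(e))
```

In a full model, the orientation axis would typically be kept (or pooled) before the tokens enter the transformer encoder, and a plain linear patch projection would fail the assertion above: a standard ViT embedding has no such guaranteed relationship between the embeddings of an image and its rotated copy.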