🤖 AI Summary
This work addresses the limited high-resolution spatial understanding of standard Vision Transformers (ViTs) in dense prediction tasks such as open-vocabulary segmentation, a limitation that stems from their fixed pretraining resolution and coarse image-patch granularity. To overcome this, the authors propose SPAR, a resolution-agnostic, single-pass ViT feature extractor that uses feature-level knowledge distillation to transfer the spatial reasoning of a finely strided, sliding-window teacher to a single-pass student. The approach requires only a feature regression loss, with no architectural modifications or pixel-level supervision, to enable efficient high-resolution feature extraction. Experiments demonstrate gains of up to 10.5 mIoU (absolute) over single-pass baselines on open-vocabulary segmentation, with the student even surpassing its far more computationally expensive sliding-window teacher and thereby removing the efficiency bottleneck of conventional high-resolution processing pipelines.
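The summary specifies only the objective: regress the single-pass student's features onto the sliding-window teacher's features. A minimal PyTorch-style sketch of one such training step might look as follows; the `student` interface, the bilinear grid alignment, and the MSE choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def distillation_step(student, image, teacher_features):
    """One hypothetical feature-distillation step.

    student          -- single-pass ViT returning a dense feature map
                        of shape (B, C, h, w) for the full image
    teacher_features -- precomputed sliding-window teacher features,
                        shape (B, C, H, W)
    """
    # Single forward pass over the full high-resolution image.
    student_features = student(image)  # (B, C, h, w)

    # Align feature grids before regression (an assumption; the paper
    # may match resolutions differently).
    student_features = F.interpolate(
        student_features,
        size=teacher_features.shape[-2:],
        mode="bilinear",
        align_corners=False,
    )

    # Feature regression only: no labels, no pixel-level supervision,
    # no architectural changes to the ViT.
    return F.mse_loss(student_features, teacher_features)
```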
📝 Abstract
Foundational Vision Transformers (ViTs) have limited effectiveness in tasks requiring fine-grained spatial understanding, due to their fixed pre-training resolution and inherently coarse patch-level representations. These challenges are especially pronounced in dense prediction scenarios, such as open-vocabulary segmentation with ViT-based vision-language models, where high-resolution inputs are essential for accurate pixel-level reasoning. Existing approaches typically process large-resolution images using a sliding-window strategy at the pre-training resolution. While this improves accuracy through finer strides, it comes at a significant computational cost. We introduce SPAR: Single-Pass Any-Resolution ViT, a resolution-agnostic dense feature extractor designed for efficient high-resolution inference. We distill the spatial reasoning capabilities of a finely strided, sliding-window teacher into a single-pass student using a feature regression loss, without requiring architectural changes or pixel-level supervision. Applied to open-vocabulary segmentation, SPAR improves single-pass baselines by up to 10.5 mIoU and even surpasses its teacher, demonstrating its effectiveness for efficient, high-resolution reasoning. Code: https://github.com/naomikombol/SPAR
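For context, below is a hedged sketch of the sliding-window baseline the abstract describes: run the ViT on overlapping crops at its pre-training resolution and average the overlapping dense features. The window, stride, and patch sizes and the `vit` interface (including its `embed_dim` attribute) are assumptions for illustration, not SPAR's actual configuration.

```python
import torch

@torch.no_grad()
def sliding_window_features(vit, image, win=224, stride=112, patch=16):
    """Hypothetical sliding-window teacher.

    Assumes `vit(crop)` returns patch features of shape
    (B, C, win // patch, win // patch), that H, W, win, and stride are
    divisible by `patch`, that H, W >= win, and that the stride divides
    H - win and W - win so the borders are covered.
    """
    B, _, H, W = image.shape
    C = vit.embed_dim  # assumed attribute of the backbone
    gh, gw = H // patch, W // patch
    feats = image.new_zeros(B, C, gh, gw)
    counts = image.new_zeros(B, 1, gh, gw)

    k = win // patch  # window size in patch units
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            # One forward pass per crop at the pre-training resolution.
            f = vit(image[:, :, y:y + win, x:x + win])
            gy, gx = y // patch, x // patch
            feats[:, :, gy:gy + k, gx:gx + k] += f
            counts[:, :, gy:gy + k, gx:gx + k] += 1

    # Average overlapping predictions per patch location.
    return feats / counts.clamp_min(1)
```

The nested loop makes the cost grow with the number of windows, so finer strides buy accuracy at a steep computational price; this is the bottleneck SPAR's single-pass student is distilled to avoid.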