🤖 AI Summary
Traditional Vision Transformers rely on discrete image patching, which constrains the effective deployment of sparsity mechanisms and hinders joint optimization of accuracy and efficiency. To address this, we propose subpixel tokenization: a differentiable tokenization scheme that places tokens at continuous, subpixel-precise positions in image space, freeing models from rigid grid constraints. An oracle-guided search adaptively optimizes the spatial distribution of tokens, enhancing representational capacity while preserving sparsity. Experiments show that our method substantially reduces the number of tokens required at inference (by 30–50% on average) while improving classification and detection accuracy, achieving superior accuracy–computation trade-offs on standard benchmarks including ImageNet and COCO. The resulting models are also more interpretable and architecturally flexible, offering a principled path toward efficient, high-fidelity visual representation learning.
📝 Abstract
Vision Transformers naturally accommodate sparsity, yet standard tokenization methods confine features to discrete patch grids. This constraint prevents models from fully exploiting sparse regimes, forcing awkward compromises. We propose Subpixel Placement of Tokens (SPoT), a novel tokenization strategy that positions tokens continuously within images, effectively sidestepping grid-based limitations. With our proposed oracle-guided search, we uncover substantial performance gains achievable with ideal subpixel token positioning, drastically reducing the number of tokens necessary for accurate predictions during inference. SPoT provides a new direction for flexible, efficient, and interpretable ViT architectures, redefining sparsity as a strategic advantage rather than an imposed limitation.
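The core idea of placing tokens at continuous rather than grid-aligned positions can be illustrated with bilinear interpolation: given fractional (y, x) coordinates, a token's features are blended from the four surrounding pixels, and the operation is differentiable with respect to the coordinates. This is only a minimal sketch of the general technique; `bilinear_sample` is a hypothetical helper, not the paper's actual SPoT tokenizer, which the abstract does not specify in detail.

```python
import numpy as np

def bilinear_sample(image, ys, xs):
    """Sample image features at continuous (y, x) positions via bilinear interpolation.

    image: (H, W, C) array; ys, xs: (N,) arrays of subpixel coordinates.
    Returns an (N, C) array of interpolated token features.
    """
    H, W, _ = image.shape
    # Integer corner coordinates, clipped so the 2x2 neighborhood stays in bounds.
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    # Fractional offsets within the pixel cell, broadcast against the channel axis.
    dy = (ys - y0)[:, None]
    dx = (xs - x0)[:, None]
    # Blend horizontally along the top and bottom rows, then vertically.
    top = (1 - dx) * image[y0, x0] + dx * image[y0, x0 + 1]
    bot = (1 - dx) * image[y0 + 1, x0] + dx * image[y0 + 1, x0 + 1]
    return (1 - dy) * top + dy * bot

# Toy image: a linear ramp in x, so an interpolated value equals its x coordinate.
img = np.tile(np.arange(8, dtype=float)[None, :, None], (8, 1, 1))  # shape (8, 8, 1)
tokens = bilinear_sample(img, np.array([2.5, 4.0]), np.array([3.25, 6.5]))
print(tokens)  # [[3.25], [6.5]]
```

Because the interpolation weights are smooth functions of the coordinates, gradients can flow back to the token positions themselves, which is what makes learning or searching over placements (e.g. with the oracle-guided search described above) possible.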