🤖 AI Summary
This work investigates the intrinsic relationship between model capacity and the minimal number of visual tokens required to preserve image semantics. We introduce "visual semantic complexity" as the smallest basis size needed to span the semantic space of an image, and empirically observe that larger models substantially reduce the requisite token count. Motivated by this observation, we propose a lightweight Adaptive Orthogonal Filtering (AOF) module that constructs compact, scalable semantic bases via orthogonal clustering of redundant tokens, coupled with dynamic basis selection guided by the Minimum Description Length principle. We validate a power-law scaling relationship between token count and model parameters across diverse ViT architectures. On benchmarks including ImageNet, our method retains ≥98% of full-token performance using ≤30% of the original tokens. Additionally, we publicly release VLC-1M, the first high-quality dataset for long visual-context modeling.
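The core idea of constructing a compact orthogonal basis from redundant tokens can be illustrated with a greedy residual-based selection pass. The sketch below is not the paper's AOF implementation; the function name, threshold `tau`, and the Gram-Schmidt-style projection are illustrative assumptions about how a minimal spanning basis might be extracted from token embeddings.

```python
import numpy as np

def orthogonal_filter(tokens: np.ndarray, tau: float = 0.3) -> np.ndarray:
    """Greedy residual-based basis selection (illustrative sketch, not the paper's AOF).

    tokens: (N, D) array of visual token embeddings.
    tau: residual-norm threshold; a token whose component orthogonal to the
         current basis has norm below tau is treated as redundant and dropped.
    Returns a (K, D) orthonormal basis with K <= N.
    """
    basis: list[np.ndarray] = []
    for t in tokens:
        r = np.asarray(t, dtype=float).copy()
        for b in basis:
            r -= (r @ b) * b  # remove the component already spanned by the basis
        norm = np.linalg.norm(r)
        if norm > tau:
            basis.append(r / norm)  # keep a new orthonormal direction
    return np.array(basis)
```

For example, four tokens lying in a two-dimensional subspace collapse to a basis of size two, matching the intuition that only the intrinsic semantic directions need to be kept.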
📝 Abstract
This paper investigates the fundamental relationship between model capacity and the minimal number of visual tokens required to preserve image semantics. Inspired by the Minimum Description Length principle, we reinterpret image tokens as vectors in a visual semantic space and define the intrinsic semantic complexity of an image as the smallest set of basis vectors needed to span this space. Building on this perspective, we propose Orthogonal Filtering, a lightweight module that adaptively clusters redundant tokens into a compact set of orthogonal bases. Through extensive experiments across a range of ViT models, we reveal a consistent token-model scaling law: larger models require significantly fewer tokens to span the visual semantic space. In addition, we contribute a visual long-context dataset.
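A power-law relation between minimal token count and model size is typically verified by a linear fit in log-log space. The snippet below shows that procedure on synthetic data; the parameter counts and token counts are made up for illustration and do not come from the paper's experiments.

```python
import numpy as np

# Hypothetical (synthetic) measurements: model parameters in millions vs.
# minimal token count needed to preserve semantics. Illustrative values only.
params = np.array([22.0, 86.0, 304.0, 632.0])   # e.g. ViT-S/B/L/H scale
tokens = np.array([147.0, 98.0, 64.0, 51.0])

# Fit tokens ~ a * params^(-b) via least squares in log-log coordinates:
# log(tokens) = log(a) - b * log(params).
slope, log_a = np.polyfit(np.log(params), np.log(tokens), 1)
a, b = np.exp(log_a), -slope  # b > 0 means larger models need fewer tokens
```

A positive fitted exponent `b` is the signature of the claimed scaling law: token count decays as a power of model size.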