Fewer Tokens, Greater Scaling: Self-Adaptive Visual Bases for Efficient and Expansive Representation Learning

📅 2025-11-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the intrinsic relationship between model capacity and the minimal number of visual tokens required to preserve image semantics. We introduce "visual semantic complexity" as the smallest basis size needed to span the semantic space of an image, and empirically observe that larger models substantially reduce the requisite token count. Building on this observation, we propose a lightweight Adaptive Orthogonal Filtering (AOF) module that constructs compact, scalable semantic bases via orthogonal clustering of redundant tokens, coupled with dynamic basis selection guided by the Minimum Description Length principle. We validate a power-law scaling relationship between token count and model parameters across diverse ViT architectures. On benchmarks including ImageNet, our method retains ≥98% of full-token performance using ≤30% of the original tokens. Additionally, we release VLC-1M, the first high-quality long visual-context modeling dataset, publicly available to the research community.

📝 Abstract
This paper investigates the fundamental relationship between model capacity and the minimal number of visual tokens required to preserve image semantics. Inspired by the Minimum Description Length principle, we reinterpret image tokens as vectors in a visual semantic space and define the intrinsic semantic complexity of an image as the smallest set of basis vectors needed to span this space. Building on this perspective, we propose Orthogonal Filtering, a lightweight module that adaptively clusters redundant tokens into a compact set of orthogonal bases. Through extensive experiments across a range of ViT models, we reveal a consistent token-model scaling law: larger models require significantly fewer tokens to span the visual semantic space. We also contribute a visual long-context dataset.
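The abstract describes clustering redundant tokens into a compact set of orthogonal bases. The paper's exact procedure is not given on this page, but the core idea can be sketched as a greedy orthogonal selection: a token is admitted to the basis only if the part of it not already explained by the current basis is large. The function name `orthogonal_filter` and the threshold `tau` below are illustrative assumptions, not the authors' API.

```python
import numpy as np

def orthogonal_filter(tokens, tau=0.5):
    """Greedily select an approximately orthogonal basis from token vectors.

    tokens: (N, D) array of token embeddings.
    tau: residual-norm threshold; a token joins the basis only if its
         component orthogonal to the current basis exceeds tau.
    Returns the indices of the selected basis tokens.
    """
    basis = []      # orthonormal directions accumulated so far
    selected = []
    for i, t in enumerate(tokens):
        r = np.asarray(t, dtype=float).copy()
        for b in basis:                  # project out existing basis directions
            r -= (r @ b) * b
        n = np.linalg.norm(r)
        if n > tau:                      # large residual -> new semantic direction
            basis.append(r / n)
            selected.append(i)
    return selected

# A near-duplicate of the first token is filtered out; the orthogonal one is kept.
keep = orthogonal_filter(np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]]))
print(keep)  # [0, 2]
```

In this sketch, `tau` plays the role the MDL criterion plays in the paper: it decides when adding another basis vector no longer pays for itself in reconstruction quality.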
Problem

Research questions and friction points this paper is trying to address.

Investigates model capacity and minimal visual tokens
Proposes adaptive token clustering into orthogonal bases
Reveals scaling law where larger models need fewer tokens
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal Filtering adaptively clusters redundant tokens
Defines intrinsic semantic complexity via basis vectors
Larger models require fewer tokens for semantic span
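The scaling claim above (a power law relating minimal token count to model parameters) can be checked with a log-log linear fit. The parameter counts and token counts below are illustrative placeholders, not figures from the paper; only the fitting procedure is the point.

```python
import numpy as np

# Hypothetical (parameter count, minimal token count) pairs showing the
# claimed trend; the numbers are illustrative, not from the paper.
params = np.array([22e6, 86e6, 307e6, 632e6])   # roughly ViT-S/B/L/H sizes
tokens = np.array([196.0, 120.0, 70.0, 50.0])

# Fit tokens ~ c * params**(-alpha) via linear regression in log-log space.
slope, log_c = np.polyfit(np.log(params), np.log(tokens), 1)
alpha = -slope   # positive alpha means larger models need fewer tokens
print(f"alpha ~ {alpha:.2f}")
```

A positive fitted exponent `alpha` is what a "larger models require fewer tokens" power law predicts; on real measurements one would also inspect the residuals before trusting the fit.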
Shawn Young
Faculty of Computer Science and Control Engineering, Shenzhen University of Advanced Technology, Shenzhen, China
Xingyu Zeng
Shenzhen University of Advanced Technology
Computer Vision · Deep Learning
Lijian Xu
Faculty of Computer Science and Control Engineering, Shenzhen University of Advanced Technology, Shenzhen, China