🤖 AI Summary
Vision models (e.g., ViT) exhibit significantly weaker reasoning and in-context learning than language models; the paper attributes this gap to conventional patchification, which lacks semantic structure and thereby hinders global contextual modeling and faithful representation of real-world data distributions.
Method: The paper introduces a "visual vocabulary" paradigm that treats objects, rather than patches, as the fundamental semantic units of an image, and proposes **object-level masked image modeling**, presented as the first semantically grounded pretraining objective operating at the object level. Combined with MLLM-driven multi-granularity supervision, the objective explicitly models inter-object semantic relationships and contextual dependencies.
Contribution/Results: The approach yields substantial gains on visual reasoning benchmarks (VQA, GQA, ScienceQA), demonstrating that object-level semantic modeling strengthens global understanding and reasoning capacity. It offers a new paradigm for building semantically aware visual encoders and narrows a key gap between vision and language modeling principles.
📝 Abstract
Recent advances in language modeling have witnessed the rise of highly desirable emergent capabilities, such as reasoning and in-context learning. However, vision models have yet to exhibit comparable progress in these areas. In this paper, we argue that this gap could stem from the lack of semantic and contextual guidance in current vision transformer (ViT) training schemes, and such a gap can be narrowed through the design of a semantic-grounded objective. Specifically, we notice that individual words in natural language are inherently semantic, and modeling directly on word tokens naturally learns a realistic distribution. In contrast, ViTs rely on spatial patchification, which inevitably lacks semantic information. To bridge this gap, we propose to directly model the "object" as the visual equivalent of the "word," pushing the model to learn the global context and semantics among visual elements. We investigate our hypotheses via masked image modeling (MIM), a framework where our approach can be readily tested by applying masks to visual objects rather than random patches. Considerable evidence from qualitative and quantitative evaluations reveals a key finding: object-level representation alone helps to learn a real-world distribution, whereas pixel-averaging shortcuts are often learned without it. Moreover, further evaluations with multimodal LLMs (MLLMs) on visual question answering (VQA, GQA, ScienceQA) tasks demonstrate the strong reasoning and contextual understanding gained with this simple objective. We hope our study highlights the effectiveness of object-level encoding and provides a plausible direction for developing stronger vision encoders and tokenizers. Code and models will be publicly released.

Keywords: Semantic Visual Tokenizer, Vision Reasoning, In-context Learning, Multimodal Reasoning
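The abstract contrasts masking whole objects with masking random patches in a MIM pipeline. The sketch below is a minimal illustration of that idea, not the paper's released code: the patch size, mask ratio, grid dimensions, and the assumption that per-object binary maps are available from an external segmenter are all illustrative choices.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# contrast random-patch masking with object-level masking for MIM.
import numpy as np

def random_patch_mask(grid_h, grid_w, mask_ratio, rng):
    """Standard MIM baseline: mask a random subset of patches."""
    num_patches = grid_h * grid_w
    num_masked = int(round(mask_ratio * num_patches))
    mask = np.zeros(num_patches, dtype=bool)
    mask[rng.choice(num_patches, size=num_masked, replace=False)] = True
    return mask.reshape(grid_h, grid_w)

def object_level_mask(object_maps, patch_size, mask_ratio, rng):
    """Object-level masking: hide whole objects, then pool to the patch grid.

    object_maps: (N, H, W) boolean array, one binary map per object instance
                 (assumed to come from an off-the-shelf segmenter).
    Returns a (H // patch_size, W // patch_size) boolean patch mask.
    """
    n, h, w = object_maps.shape
    grid_h, grid_w = h // patch_size, w // patch_size
    pixel_mask = np.zeros((h, w), dtype=bool)

    # Mask randomly chosen objects until roughly mask_ratio of pixels are hidden.
    target = mask_ratio * h * w
    for idx in rng.permutation(n):
        if pixel_mask.sum() >= target:
            break
        pixel_mask |= object_maps[idx]

    # A patch is masked if any of its pixels belong to a masked object.
    patch_mask = pixel_mask[: grid_h * patch_size, : grid_w * patch_size]
    patch_mask = patch_mask.reshape(grid_h, patch_size, grid_w, patch_size)
    return patch_mask.any(axis=(1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy rectangular "objects" in a 224x224 image with 16x16 patches.
    object_maps = np.zeros((2, 224, 224), dtype=bool)
    object_maps[0, 20:90, 30:120] = True     # object 1
    object_maps[1, 120:200, 140:210] = True  # object 2
    obj_mask = object_level_mask(object_maps, patch_size=16, mask_ratio=0.4, rng=rng)
    rand_mask = random_patch_mask(14, 14, mask_ratio=0.4, rng=rng)
    print("object-level patches masked:", int(obj_mask.sum()), "of", obj_mask.size)
    print("random patches masked:      ", int(rand_mask.sum()), "of", rand_mask.size)
```

Downstream of this step, the masked patches would feed a standard MIM encoder/decoder exactly as in patch-based pipelines; only the mask construction changes.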