Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large vision-language models (LVLMs) apply autoregressive supervision only to text, limiting their ability to leverage caption-free images, causing visual detail omission, and preventing modeling of purely visual content. Method: We propose Autoregressive Semantic Visual Reconstruction (ASVR), a unified autoregressive framework jointly modeling vision and language by reconstructing discrete semantic tokens—derived from images via a learned tokenizer—rather than raw pixels, enabling fine-grained visual understanding. Contribution/Results: We empirically demonstrate, for the first time, that semantic-level autoregressive visual reconstruction consistently improves VLM performance, whereas pixel-level reconstruction is ineffective or even detrimental. ASVR enables efficient mapping from continuous visual features to discrete semantic tokens. Compatible with mainstream architectures (e.g., LLaVA), it boosts LLaVA-1.5 by +5% on average across 14 multimodal benchmarks, exhibiting robustness across data scales (556K–2M samples) and diverse LLM backbones. Code is publicly available.

📝 Abstract
Typical large vision-language models (LVLMs) apply autoregressive supervision solely to textual sequences, without fully incorporating the visual modality into the learning process. This results in three key limitations: (1) an inability to utilize images without accompanying captions, (2) the risk that captions omit critical visual details, and (3) the challenge that certain vision-centric content cannot be adequately conveyed through text. As a result, current LVLMs often prioritize vision-to-language alignment while potentially overlooking fine-grained visual information. While some prior works have explored autoregressive image generation, effectively leveraging autoregressive visual supervision to enhance image understanding remains an open challenge. In this paper, we introduce Autoregressive Semantic Visual Reconstruction (ASVR), which enables joint learning of visual and textual modalities within a unified autoregressive framework. We show that autoregressively reconstructing the raw visual appearance of images does not enhance and may even impair multimodal understanding. In contrast, autoregressively reconstructing the semantic representation of images consistently improves comprehension. Notably, we find that even when models are given continuous image features as input, they can effectively reconstruct discrete semantic tokens, resulting in stable and consistent improvements across a wide range of multimodal understanding benchmarks. Our approach delivers significant performance gains across varying data scales (556K–2M) and types of LLM backbones. Specifically, ASVR improves LLaVA-1.5 by 5% in average scores across 14 multimodal benchmarks. The code is available at https://github.com/AlenjandroWang/ASVR.
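The abstract describes a unified autoregressive objective that supervises both the text stream and discrete semantic visual tokens (ids from a semantic tokenizer, not raw pixels). A minimal sketch of such a joint loss, assuming one cross-entropy term per stream and a hypothetical weighting `alpha` (the paper's exact loss formulation and weighting are not reproduced here):

```python
import math

def cross_entropy(logits, targets):
    """Mean softmax cross-entropy over a sequence of positions.

    logits: list of per-position score lists; targets: list of class ids.
    """
    total = 0.0
    for scores, t in zip(logits, targets):
        # log-sum-exp with max-subtraction for numerical stability
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        total += log_z - scores[t]
    return total / len(targets)

def asvr_loss(text_logits, text_targets, vis_logits, vis_targets, alpha=1.0):
    # Joint autoregressive objective: next-token CE on the text stream
    # plus CE on discrete semantic visual token ids.
    # `alpha` is a hypothetical balancing weight, not from the paper.
    return (cross_entropy(text_logits, text_targets)
            + alpha * cross_entropy(vis_logits, vis_targets))

# With uniform logits, each CE term reduces to log(vocabulary size):
# text vocab 4 over 3 positions, visual codebook 8 over 2 positions.
loss = asvr_loss([[0.0] * 4] * 3, [0, 1, 2], [[0.0] * 8] * 2, [3, 5])
```

In this sketch the visual targets are discrete codebook indices, mirroring the paper's finding that semantic-token reconstruction (rather than pixel-level reconstruction) is what improves understanding.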
Problem

Research questions and friction points this paper is trying to address.

LVLMs lack visual modality in autoregressive learning
Captions may omit critical visual details
Vision-centric content inadequately conveyed through text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive semantic visual reconstruction for joint learning
Reconstructing discrete semantic tokens enhances understanding
Improves multimodal benchmarks with stable performance gains
Dianyi Wang
Fudan University, Shanghai Innovation Institute
Multi-modal Learning
Wei Song
Shanghai Innovation Institute, Fudan University, AutoLab, Westlake University, Zhejiang University
Yikun Wang
Fudan University
Computer vision | Natural language processing
Siyuan Wang
University of Southern California
Kaicheng Yu
Assistant Professor, Westlake University, PI of Autonomous Intelligence Lab
Computer vision | 3D understanding | Autonomous perception | Automatic machine learning
Zhongyu Wei
Shanghai Innovation Institute, Fudan University
Jiaqi Wang
Shanghai Innovation Institute, Shanghai AI Lab