🤖 AI Summary
Existing multimodal large language models (MLLMs) struggle to unify visual understanding and generation within a single framework. To address this, the authors propose VARGPT, a vision autoregressive MLLM that jointly models visual understanding (via next-token prediction) and visual generation (via next-scale prediction) in a unified architecture. Methodologically, VARGPT combines scale-wise visual token sequence modeling with a three-stage training strategy, consisting of a pre-training phase followed by two mixed visual instruction-tuning phases, enabling joint optimization of understanding and generation. Built upon the LLaVA architecture, VARGPT accommodates mixed-modal input and output and instruction-driven image synthesis. Experiments show that VARGPT outperforms LLaVA-1.5 on visual question answering and visual reasoning benchmarks while natively supporting autoregressive image generation and instruction-to-image synthesis, demonstrating strong multimodal instruction generalization.
📝 Abstract
We present VARGPT, a novel multimodal large language model (MLLM) that unifies visual understanding and generation within a single autoregressive framework. VARGPT employs a next-token prediction paradigm for visual understanding and a next-scale prediction paradigm for visual autoregressive generation. VARGPT extends the LLaVA architecture, achieving efficient scale-wise autoregressive visual generation within an MLLM while seamlessly accommodating mixed-modal input and output in a single model. VARGPT undergoes a three-stage unified training process on specially curated datasets, comprising a pre-training phase and two mixed visual instruction-tuning phases. These stages are designed, respectively, to align visual and textual features, to strengthen instruction following for both understanding and generation, and to improve visual generation quality. Despite its LLaVA-based architecture for multimodal understanding, VARGPT significantly outperforms LLaVA-1.5 across various vision-centric benchmarks, such as visual question answering and reasoning tasks. Notably, VARGPT naturally supports autoregressive visual generation and instruction-to-image synthesis, showcasing its versatility in both visual understanding and generation tasks. Project page: https://vargpt-1.github.io/
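The next-scale prediction paradigm described above can be illustrated with a minimal sketch: instead of emitting image tokens one at a time (next-token prediction), each autoregressive step emits an entire s×s map of discrete image tokens, conditioned on all coarser maps produced so far. The `predict` callable below is a hypothetical stand-in for the model's generation head, not VARGPT's actual module; the scales and codebook size are illustrative assumptions.

```python
import random

def next_scale_generate(predict, scales=(1, 2, 4)):
    """Sketch of next-scale autoregressive generation: one forward pass
    per scale, each producing a whole s x s token map conditioned on the
    flattened tokens of every coarser scale. `predict` is a hypothetical
    stand-in for the transformer's generation head."""
    context = []   # flattened tokens from all coarser scales so far
    maps = []
    for s in scales:
        flat = predict(context, s * s)          # predict the whole map at once
        maps.append([flat[i * s:(i + 1) * s] for i in range(s)])
        context.extend(flat)                    # condition the next scale
    return maps

# Toy predictor: draws pseudo-random codes from a small codebook,
# seeded by the context length (illustrative only, not a trained model).
def toy_predict(context, n, vocab=16):
    rng = random.Random(len(context))
    return [rng.randrange(vocab) for _ in range(n)]

maps = next_scale_generate(toy_predict)
print([(len(m), len(m[0])) for m in maps])  # → [(1, 1), (2, 2), (4, 4)]
```

The final 4×4 map (or a further-upsampled final scale) would be decoded back to pixels by an image tokenizer's decoder; the key contrast with next-token prediction is that the number of autoregressive steps grows with the number of scales, not with the number of tokens.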