🤖 AI Summary
Existing multimodal large language models (MLLMs) confine reasoning to the linguistic space, treating visual inputs as static premises, which impedes fine-grained, dynamic reasoning within the visual embedding space. To address this, we propose Latent Visual Reasoning (LVR), the first framework to extend autoregressive reasoning to the visual token level: the language model generates latent representations of salient visual tokens, enabling generative visual reasoning in a unified semantic space. Our method combines a vision encoder, latent-state reconstruction training, and GRPO-based reinforcement learning to jointly optimize visual latent modeling and text generation. On the perception-intensive visual question answering benchmark MMVP, LVR achieves 71.67% accuracy, clearly surpassing Qwen2.5-VL (66.67%) and demonstrating the effectiveness of latent visual reasoning.
📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved notable gains on various tasks by incorporating Chain-of-Thought (CoT) reasoning in the language space. Recent work extends this direction by leveraging external tools for visual editing, thereby enhancing the visual signal along the reasoning trajectory. Nevertheless, these approaches remain fundamentally constrained: reasoning is still confined to the language space, with visual information treated as static preconditions. We introduce Latent Visual Reasoning (LVR), a new paradigm that enables autoregressive reasoning directly in the visual embedding space. A visual encoder first projects images into visual tokens within a joint semantic space shared with the language model. The language model is then trained to generate latent states that reconstruct the key visual tokens critical for answering the query, constituting the process of latent visual reasoning. By interleaving LVR with standard text generation, our model achieves substantial gains on perception-intensive visual question answering tasks. In addition, we adapt the GRPO algorithm to conduct reinforcement learning on latent reasoning, further balancing LVR and textual generation. We show that LVR substantially improves fine-grained visual understanding and perception, achieving 71.67% on MMVP compared to 66.67% with Qwen2.5-VL. Code and model weights will be released.
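The training objective sketched in the abstract couples two signals: a standard next-token loss on the textual part of the trajectory, and a reconstruction loss that pulls the latent states generated by the language model toward the key visual tokens. A minimal PyTorch sketch of such a combined loss is below; all tensor shapes, the cosine-distance metric, and the weighting term `alpha` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an LVR-style training objective:
# cross-entropy on text tokens plus a reconstruction term that aligns
# LM-generated latent states with target visual-token embeddings.
# Shapes, metric, and weighting are assumptions for illustration.
import torch
import torch.nn.functional as F


def lvr_loss(text_logits, text_targets, latent_states, key_visual_tokens,
             alpha=1.0):
    """Combine language-modeling loss with latent visual reconstruction.

    text_logits:       (B, T, V) logits over the text vocabulary
    text_targets:      (B, T)    gold token ids
    latent_states:     (B, K, D) latent states emitted by the LM
    key_visual_tokens: (B, K, D) target visual-token embeddings
    alpha:             weight on the reconstruction term (assumed)
    """
    # Standard next-token cross-entropy on the textual spans.
    ce = F.cross_entropy(
        text_logits.reshape(-1, text_logits.size(-1)),
        text_targets.reshape(-1),
    )
    # Cosine-distance reconstruction pulling generated latents toward
    # the salient visual tokens (the paper may use a different metric).
    rec = 1.0 - F.cosine_similarity(
        latent_states, key_visual_tokens, dim=-1
    ).mean()
    return ce + alpha * rec
```

At inference time, the model would interleave these latent visual states with ordinary text tokens along the autoregressive trajectory; the loss above only illustrates how the two modes could be balanced during supervised training, before the GRPO-based reinforcement stage.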