🤖 AI Summary
Current multimodal large language models (MLLMs) rely heavily on large-scale human-annotated image-text pairs, which severely limits the development of deep visual reasoning capabilities. To address this, we propose RRVF, a novel framework that enables end-to-end visual reasoning learned directly from raw images, without image-text supervision. RRVF establishes a closed-loop iterative process of "reasoning → rendering → visual feedback" and leverages the asymmetry that "verification is easier than generation" to design a self-correcting reinforcement learning mechanism. It jointly optimizes a multimodal LLM with the Group Relative Policy Optimization (GRPO) algorithm and incorporates tool-use capabilities to support multi-turn interactive reasoning. Evaluated on image-to-code generation tasks for charts and web interfaces, RRVF substantially outperforms leading open-source MLLMs and supervised fine-tuning baselines. Our results empirically validate the efficacy of pure visual feedback as a supervisory signal and establish a new paradigm for visual reasoning without explicit annotation.
📝 Abstract
Multimodal Large Language Models (MLLMs) have exhibited impressive performance across various visual tasks. Subsequent investigations into enhancing their visual reasoning abilities have significantly expanded their performance envelope. However, a critical bottleneck in the advancement of MLLMs toward deep visual reasoning is their heavy reliance on curated image-text supervision. To address this problem, we introduce a novel framework termed ``Reasoning-Rendering-Visual-Feedback'' (RRVF), which enables MLLMs to learn complex visual reasoning from raw images alone. This framework builds on the ``Asymmetry of Verification'' principle to train MLLMs, i.e., verifying a rendered output against a source image is easier than generating it. We demonstrate that this relative ease provides an ideal reward signal for optimization via Reinforcement Learning (RL) training, reducing reliance on image-text supervision. Guided by this principle, RRVF implements a closed-loop iterative process encompassing reasoning, rendering, and visual feedback components, enabling the model to perform self-correction through multi-turn interactions and tool invocation, while the pipeline is optimized end-to-end by the Group Relative Policy Optimization (GRPO) algorithm. Extensive experiments on image-to-code generation for data charts and web interfaces show that RRVF substantially outperforms existing open-source MLLMs and surpasses supervised fine-tuning baselines. Our findings demonstrate that systems driven by purely visual feedback present a viable path toward more robust and generalizable reasoning models without requiring explicit supervision. Code will be available at https://github.com/L-O-I/RRVF.
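The closed loop described above can be illustrated with a minimal sketch. Every component here is a hypothetical stand-in of my own devising, not the paper's implementation: a real system would use an MLLM policy updated with GRPO, an actual chart/HTML renderer, and a learned or perceptual visual-similarity reward. The toy versions below only show how the "reasoning → rendering → visual feedback" iteration and the verification-based reward fit together.

```python
from typing import List, Optional, Tuple

Image = List[List[int]]  # toy binary pixel grid standing in for a real image

def render(code: str) -> Image:
    """Toy 'renderer': each line of the generated code becomes a pixel row
    ('#' -> 1, anything else -> 0). A real renderer would execute chart or
    HTML code and rasterize the result."""
    return [[1 if ch == "#" else 0 for ch in row] for row in code.splitlines()]

def visual_similarity(a: Image, b: Image) -> float:
    """Fraction of matching pixels. This is the 'easy verification' side of
    the asymmetry: comparing two images is far cheaper than generating the
    code that produced one of them."""
    total = match = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            match += int(pa == pb)
    return match / total if total else 0.0

def toy_policy(target: Image, prev_code: Optional[str],
               feedback: Optional[float]) -> str:
    """Hypothetical policy: makes an imperfect first attempt, then
    'self-corrects' on later turns. A real policy is the MLLM itself,
    conditioned on the source image and the visual feedback."""
    if feedback is None:
        return "#.\n.."  # deliberately imperfect first attempt
    return "\n".join("".join("#" if p else "." for p in row) for row in target)

def rrvf_episode(source: Image, policy, max_turns: int = 3) -> Tuple[str, float]:
    """One multi-turn episode: reason (generate code), render, score against
    the source image, and feed the reward back for self-correction."""
    code: Optional[str] = None
    feedback: Optional[float] = None
    reward = 0.0
    for _ in range(max_turns):
        code = policy(source, code, feedback)              # reasoning step
        reward = visual_similarity(render(code), source)   # render + verify
        if reward == 1.0:                                  # pixel-perfect match
            break
        feedback = reward  # scalar visual feedback drives the next turn
    return code, reward
```

In an RL setting, the per-turn `reward` would be aggregated over sampled rollouts and used by GRPO to update the policy; here the scalar feedback simply triggers the toy policy's second, corrected attempt.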