🤖 AI Summary
Visual language models (VLMs) suffer from hallucination and lack built-in mechanisms for error detection and correction during multimodal reasoning. To address this, we propose a Monte Carlo Tree Search (MCTS)-driven multimodal actor-critic framework that requires no human annotation. Our approach introduces three key innovations: (1) the first multimodal automatic critique mechanism, which generates divergent, comparative feedback via dual-path generation; (2) a self-supervised critique data synthesis method that constructs high-quality correction signals directly from model outputs; and (3) the integration of MCTS for search and iterative refinement of multimodal reasoning paths. Evaluated on multiple public benchmarks, our method significantly improves complex reasoning accuracy across mainstream VLMs and demonstrates strong cross-model generalization. It establishes a new paradigm for reducing reliance on human annotations while enhancing VLM robustness and reliability in multimodal reasoning.
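The critique-data synthesis step can be pictured concretely: after MCTS rollouts, pair a correct and an incorrect reasoning path that share an ancestor, and keep the steps after the divergence point as the comparative signal. The sketch below is illustrative only; the `Node` structure, field names, and step strings are assumptions, not the paper's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    """One reasoning step in the search tree (hypothetical structure)."""
    step: str
    correct: Optional[bool] = None        # set on leaf nodes after answer checking
    children: List["Node"] = field(default_factory=list)

def leaf_paths(node: Node, path: Tuple[Node, ...] = ()):
    """Yield every root-to-leaf path in the tree."""
    path = path + (node,)
    if not node.children:
        yield path
    for child in node.children:
        yield from leaf_paths(child, path)

def divergent_pairs(root: Node):
    """Pair correct and incorrect paths, split at their last shared ancestor."""
    paths = list(leaf_paths(root))
    good = [p for p in paths if p[-1].correct]
    bad = [p for p in paths if p[-1].correct is False]
    pairs = []
    for g in good:
        for b in bad:
            # Longest common prefix of nodes = the shared ancestor chain.
            k = 0
            while k < min(len(g), len(b)) and g[k] is b[k]:
                k += 1
            pairs.append((
                [n.step for n in g[:k]],   # shared prefix
                [n.step for n in g[k:]],   # correct continuation
                [n.step for n in b[k:]],   # incorrect continuation
            ))
    return pairs

# Toy tree: two continuations of the same first step, one right, one wrong.
root = Node("read image")
good_branch = Node("count 3 apples", children=[Node("answer: 3", correct=True)])
bad_branch = Node("count 4 apples", children=[Node("answer: 4", correct=False)])
root.children = [good_branch, bad_branch]

print(divergent_pairs(root))
```

An annotator model would then be prompted with each triple to produce the corrective critique text.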
📝 Abstract
Visual language models (VLMs) have demonstrated strong performance across diverse multimodal reasoning tasks but still face challenges such as hallucinations, resulting in incorrect reasoning outcomes. Inspired by recent research on external feedback mechanisms in large language models (LLMs), we propose a multimodal actor-critic framework to enhance VLM reasoning capabilities. Specifically, the actor model generates step-by-step reasoning paths based on image and text inputs, while the critic model evaluates these reasoning paths and provides corrective feedback. The actor model iteratively refines its reasoning based on the feedback until the reasoning outcome is deemed satisfactory by the critic model. To reduce reliance on costly manual annotations, we introduce an automated method for constructing multimodal critique datasets. By leveraging Monte Carlo Tree Search (MCTS), we systematically guide the actor model to explore diverse reasoning paths. To obtain critique data for correcting erroneous reasoning steps, we prompt an annotator model to compare pairs of reasoning paths diverging from a shared ancestor node: one leading to a correct conclusion and the other to an incorrect one. This approach enables us to construct the MMC (MCTS-based Multimodal Critique) dataset, upon which we further develop a comprehensive training and inference pipeline. Extensive experiments conducted on several public benchmark datasets and mainstream VLMs demonstrate that our approach significantly improves the performance of VLMs on complex multimodal reasoning tasks, underscoring its effectiveness and wide applicability.
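The actor-critic refinement loop described above can be sketched as follows. The function names, the refinement budget, and the toy actor/critic stand-ins are assumptions for illustration; in the actual framework both roles are VLMs that also take image inputs.

```python
from typing import Callable, Tuple

MAX_ROUNDS = 3  # assumed refinement budget; the abstract does not fix a value

def refine_with_critic(
    actor: Callable[[str, str], str],                # (question, feedback) -> reasoning path
    critic: Callable[[str, str], Tuple[bool, str]],  # (question, reasoning) -> (ok?, feedback)
    question: str,
) -> str:
    """Iterate: generate, critique, revise, until the critic is satisfied."""
    feedback = ""
    reasoning = actor(question, feedback)
    for _ in range(MAX_ROUNDS):
        ok, feedback = critic(question, reasoning)
        if ok:
            break
        reasoning = actor(question, feedback)  # revise using corrective feedback
    return reasoning

# Toy stand-ins for the two model calls (real models would see the image too).
def toy_actor(question: str, feedback: str) -> str:
    return "corrected path" if feedback else "flawed path"

def toy_critic(question: str, reasoning: str) -> Tuple[bool, str]:
    if reasoning == "flawed path":
        return False, "step 2 misreads the image; recount the objects"
    return True, ""

print(refine_with_critic(toy_actor, toy_critic, "How many apples?"))
# -> corrected path
```

The loop terminates either on critic approval or when the round budget is exhausted, so a stubborn critic cannot stall inference indefinitely.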