MMC: Iterative Refinement of VLM Reasoning via MCTS-based Multimodal Critique

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visual language models (VLMs) suffer from hallucination and lack built-in mechanisms for error detection and correction during multimodal reasoning. To address this, we propose a Monte Carlo Tree Search (MCTS)-driven multimodal actor-critic framework that requires no human annotation. Our approach introduces three key innovations: (1) the first multimodal automatic critique mechanism, which generates divergent, comparative feedback via dual-path generation; (2) a self-supervised critique data synthesis method that constructs high-quality correction signals directly from model outputs; and (3) the integration of MCTS for search and iterative refinement of multimodal reasoning paths. Evaluated on multiple public benchmarks, our method significantly improves complex reasoning accuracy across mainstream VLMs and demonstrates strong cross-model generalization. It establishes a new paradigm for reducing reliance on human annotations while enhancing VLM robustness and reliability in multimodal reasoning.

📝 Abstract
Visual language models (VLMs) have demonstrated strong performance across diverse multimodal reasoning tasks but still face challenges such as hallucinations, resulting in incorrect reasoning outcomes. Inspired by recent research on external feedback mechanisms in large language models (LLMs), we propose a multimodal actor-critic framework to enhance VLM reasoning capabilities. Specifically, the actor model generates step-by-step reasoning paths based on image and text inputs, while the critic model evaluates these reasoning paths and provides corrective feedback. The actor model iteratively refines its reasoning based on the feedback until the reasoning outcome is deemed satisfactory by the critic model. To reduce reliance on costly manual annotations, we introduce an automated method for constructing multimodal critique datasets. By leveraging Monte Carlo Tree Search (MCTS), we systematically guide the actor model to explore diverse reasoning paths. To obtain critique data for correcting erroneous reasoning steps, we prompt an annotator model to compare pairs of reasoning paths diverging from a shared ancestor node: one leading to a correct conclusion and the other to an incorrect one. This approach enables us to construct the MMC (MCTS-based Multimodal Critique) dataset, upon which we further develop a comprehensive training and inference pipeline. Extensive experiments conducted on several public benchmark datasets and mainstream VLMs demonstrate that our approach significantly improves the performance of VLMs on complex multimodal reasoning tasks, underscoring its effectiveness and wide applicability.
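The critique-data construction step in the abstract can be sketched in miniature. The names below (`Node`, `leaves`, `critique_pairs`) are hypothetical, not the paper's API; the real pipeline builds the tree via MCTS rollouts and prompts a VLM annotator with each pair, while this sketch only shows how correct/incorrect reasoning paths diverging from a shared ancestor might be extracted from a finished search tree.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """One reasoning step in the search tree (hypothetical structure)."""
    step: str
    children: list = field(default_factory=list)
    correct: Optional[bool] = None  # set on leaves after checking the final answer

def leaves(node, prefix=()):
    """Yield (full reasoning path, correctness) for every leaf under `node`."""
    path = prefix + (node.step,)
    if not node.children:
        yield path, node.correct
    for child in node.children:
        yield from leaves(child, path)

def critique_pairs(root):
    """Pair each correct path with each incorrect one, recording where they
    diverge from their shared ancestor. In the paper's pipeline, an annotator
    model would then be prompted with such a pair to write the critique."""
    paths = list(leaves(root))
    pairs = []
    for good, ok_good in paths:
        if ok_good is not True:
            continue
        for bad, ok_bad in paths:
            if ok_bad is not False:
                continue
            # Length of the shared prefix marks the divergence point.
            k = 0
            while k < min(len(good), len(bad)) and good[k] == bad[k]:
                k += 1
            pairs.append({"prefix": good[:k],
                          "correct": good[k:],
                          "incorrect": bad[k:]})
    return pairs
```

Each emitted pair shares the ancestor prefix and differs only after the divergence point, which is exactly the contrast the annotator model needs to localize the erroneous step.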
Problem

Research questions and friction points this paper is trying to address.

Reduces VLM hallucinations in multimodal reasoning tasks
Automates critique dataset construction for feedback
Enhances reasoning via MCTS-guided iterative refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal actor-critic framework for VLM reasoning
Automated MCTS-based critique dataset construction
Iterative refinement via corrective feedback loop
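The feedback loop named above might look roughly like the following. This is a minimal sketch under assumed interfaces: `actor` and `critic` are stand-in stubs for the two VLMs (the stubs return canned answers so the loop terminates), and the function names are illustrative, not the paper's.

```python
def actor(image, question, feedback=None):
    # Stand-in for the actor VLM: returns a step-by-step reasoning path.
    # A real implementation would run multimodal inference here.
    path = ["step 1: read the chart", "step 2: compare the two bars"]
    if feedback:
        path.append(f"revised per critique: {feedback}")
    return path

def critic(image, question, reasoning_path):
    # Stand-in for the critic VLM: returns (is_satisfactory, critique_text).
    if any("revised per critique" in step for step in reasoning_path):
        return True, ""
    return False, "step 2 misreads the y-axis scale"

def refine(image, question, max_rounds=3):
    # Actor proposes a reasoning path; critic either accepts it or returns
    # corrective feedback that conditions the next actor attempt.
    feedback = None
    for _ in range(max_rounds):
        path = actor(image, question, feedback)
        satisfied, feedback = critic(image, question, path)
        if satisfied:
            return path
    return path  # best effort after max_rounds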
Shuhang Liu
NERC-SLIP, University of Science and Technology of China
Zhenrong Zhang
NERC-SLIP, University of Science and Technology of China
Pengfei Hu
NERC-SLIP, University of Science and Technology of China
Jiefeng Ma
USTC
NLP, Language Modelling, Document Intelligence
Jun Du
NERC-SLIP, University of Science and Technology of China
Qing Wang
NERC-SLIP, University of Science and Technology of China
Jianshu Zhang
IFLYTEK Research
Quan Liu
IFLYTEK Research
Jianqing Gao
IFLYTEK Research
Feng Ma
IFLYTEK Research