Mixture-of-Visual-Thoughts: Exploring Context-Adaptive Reasoning Mode Selection for General Visual Reasoning

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing visual reasoning methods are typically designed around a specific reasoning mode, which limits their generalizability across diverse tasks and domains. To address this, the authors propose Mixture-of-Visual-Thoughts (MoVT), an adaptive reasoning paradigm that unifies heterogeneous reasoning modes within a single model and selects the appropriate mode based on context. MoVT is trained with AdaVaR, a two-stage Adaptive Visual Reasoning framework: a supervised cold-start stage unifies and teaches the different modes, and a subsequent reinforcement learning stage with the proposed AdaGRPO algorithm induces context-adaptive mode selection. Experiments across a range of visual reasoning scenarios show consistent improvements, indicating that the model learns to differentiate modes and invoke the appropriate reasoning path per input, positioning MoVT as a step toward general, adaptive visual reasoning models.

📝 Abstract
Current visual reasoning methods mainly focus on exploring specific reasoning modes. Although improvements can be achieved in particular domains, they struggle to develop general reasoning capabilities. Inspired by this, we propose a novel adaptive reasoning paradigm, Mixture-of-Visual-Thoughts (MoVT), which unifies different reasoning modes within a single model and guides it to select the appropriate mode based on context. To achieve this, we introduce AdaVaR, a two-stage Adaptive Visual Reasoning learning framework: different modes are unified and learned during the supervised cold-start stage, and the mode selection capability is induced via an RL process with a carefully designed AdaGRPO algorithm. Extensive experiments show that AdaVaR effectively guides the model to learn and differentiate multiple modes and perform context-adaptive mode selection, achieving consistent improvement across various scenarios, highlighting MoVT as an effective solution for building general visual reasoning models.
Problem

Research questions and friction points this paper is trying to address.

Unifying multiple reasoning modes in visual reasoning
Enabling context-adaptive selection of reasoning modes
Developing general visual reasoning capabilities across scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies multiple reasoning modes in single model
Uses two-stage learning framework for adaptation
Performs context-adaptive mode selection via RL
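The paper does not spell out AdaGRPO here, but GRPO-style training (which AdaGRPO presumably extends) samples a group of responses per input, scores them, and normalizes each reward against the group's statistics so that responses using the better reasoning mode receive positive advantages. A minimal sketch of that group-relative advantage step, with hypothetical mode tags and toy rewards:

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: normalize each sampled
    response's reward by the group's mean and standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Toy group sampled for one input: each response is tagged with the
# reasoning mode it used (mode names are illustrative, not the paper's).
# Rewards favor the mode that answered correctly, so the policy update
# is pushed toward selecting that mode in similar contexts.
group = [
    {"mode": "text-CoT", "reward": 1.0},
    {"mode": "grounded", "reward": 0.0},
    {"mode": "text-CoT", "reward": 1.0},
    {"mode": "grounded", "reward": 0.0},
]
advs = grpo_advantages([g["reward"] for g in group])
# Responses in the higher-reward mode get positive advantages,
# the others negative ones.
```

This is only the reward-normalization core; the paper's actual AdaGRPO adds whatever machinery is needed to shape mode selection on top of it.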
Zejun Li
Fudan University
vision-language, multi-modality
Yingxiu Zhao
Taobao & Tmall Group of Alibaba
Jiwen Zhang
Fudan University
multimodal learning, robotics
Siyuan Wang
University of Southern California
Yang Yao
Taobao & Tmall Group of Alibaba
Runzhou Zhao
Taobao & Tmall Group of Alibaba
Jun Song
Shenzhen University
nanophotonics
Bo Zheng
Taobao & Tmall Group of Alibaba
Zhongyu Wei
Fudan University