Think 360°: Evaluating the Width-centric Reasoning Capability of MLLMs Beyond Depth

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical gap in current multimodal large language models (MLLMs), which predominantly emphasize deep chain-of-thought reasoning in visual question answering while neglecting the evaluation of reasoning breadth—such as multi-path exploration and constraint-aware optimization. To bridge this gap, we introduce and formally quantify the concept of “reasoning width,” presenting a comprehensive benchmark comprising over 1,200 high-quality, heterogeneous multimodal samples. Coupled with a fine-grained Tree-of-Thought protocol, our framework systematically evaluates MLLMs’ ability to perform synergistic wide-and-deep reasoning through parallel path traversal and multi-constraint pruning. Extensive evaluations across more than 30 state-of-the-art MLLMs reveal significant deficiencies in their capacity to effectively integrate deep reasoning with broad exploratory strategies.

📝 Abstract
In this paper, we present a holistic multimodal benchmark that evaluates the reasoning capabilities of MLLMs with an explicit focus on reasoning width, a complementary dimension to the more commonly studied reasoning depth. Reasoning depth measures a model's ability to carry out long-chain, sequential reasoning in which each step is tightly and rigorously linked to the next. Reasoning width, in contrast, captures the model's capacity for broad trial-and-error search and multi-constraint optimization: the model must systematically traverse many parallel candidate reasoning paths, apply diverse constraints to prune unpromising branches, and identify valid solution routes for efficient iteration or backtracking. To this end, we carefully curate 1,200+ high-quality multimodal cases spanning heterogeneous domains and propose a fine-grained tree-of-thought evaluation protocol that jointly quantifies reasoning width and depth. We evaluate 12 major model families (over 30 advanced MLLMs) across difficulty tiers, question types, and required skills. Results show that while current models perform strongly on general or common-sense VQA tasks, they still struggle to combine deep sequential thought chains with wide exploratory search to perform genuine insight-based reasoning. Finally, we analyze characteristic failure modes to suggest directions for building MLLMs that reason not only deeper but also wider.
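The abstract describes the evaluation protocol only at a high level; as a minimal sketch of the idea (all class names and the boolean pruning rule here are our own illustrative assumptions, not the paper's actual protocol), depth and width of a tree-of-thought trace could be quantified over a reasoning tree like this:

```python
from dataclasses import dataclass, field

@dataclass
class ThoughtNode:
    """One step in a reasoning tree; children are alternative continuations."""
    content: str
    satisfies_constraints: bool = True  # False means this branch gets pruned
    children: list = field(default_factory=list)

def depth(node: ThoughtNode) -> int:
    """Reasoning depth: length of the longest non-pruned chain from this node."""
    if not node.satisfies_constraints:
        return 0  # pruned branches contribute no depth
    if not node.children:
        return 1
    return 1 + max(depth(c) for c in node.children)

def width(node: ThoughtNode) -> int:
    """Reasoning width: number of distinct paths explored, pruned ones included."""
    if not node.children:
        return 1
    return sum(width(c) for c in node.children)

# Toy trace: three branches explored, one pruned by a constraint.
root = ThoughtNode("root", children=[
    ThoughtNode("path A", children=[ThoughtNode("path A, step 2")]),
    ThoughtNode("path B", satisfies_constraints=False),  # pruned
    ThoughtNode("path C"),
])
print(depth(root))  # -> 3 (root -> path A -> step 2)
print(width(root))  # -> 3 (three branches attempted)
```

A real protocol would also need to score partial credit, backtracking, and how constraints are checked per branch; this only illustrates why width and depth are orthogonal measurements over the same tree.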
Problem

Research questions and friction points this paper is trying to address.

reasoning width
multimodal large language models
reasoning depth
insight-based reasoning
broad trial-and-error search
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning width
multimodal benchmark
tree-of-thought evaluation
MLLMs
broad exploratory reasoning
Mingrui Chen
Institute of Automation, Chinese Academy of Sciences
Computer Vision · Foundation Models
Hexiong Yang
NLPR&MAIS, Institute of Automation, Chinese Academy of Sciences; School of Advanced Interdisciplinary Science, University of Chinese Academy of Sciences
Haogeng Liu
TikTok
Machine Learning · Generative Model · Multimodal Large Language Model
Huaibo Huang
NLPR, MAIS, CASIA
Computer Vision · Generative Models · Low-level Vision · Face Recognition
Ran He
School of Artificial Intelligence, University of Chinese Academy of Sciences; NLPR&MAIS, Institute of Automation, Chinese Academy of Sciences; Zhongguancun Academy