MM-THEBench: Do Reasoning MLLMs Think Reasonably?

📅 2026-01-30
🤖 AI Summary
Existing benchmarks struggle to assess hallucinations in the intermediate reasoning chains of multimodal large language models (MLLMs), overlooking both the plausibility of the reasoning process and perceptual inaccuracies along the way. To close this gap, this work proposes MM-THEBench, the first fine-grained hallucination benchmark tailored to reasoning-oriented MLLMs. Grounded in cognitive science, it introduces a structured hallucination taxonomy, human-verified multimodal reasoning annotations, and a multi-level automated evaluation framework. Experiments on mainstream reasoning MLLMs show that MM-THEBench uncovers how hallucinations arise within chain-of-thought reasoning and substantially improves the diagnosis of reasoning plausibility and interpretability.

📝 Abstract
Recent advances in multimodal large language models (MLLMs) mark a shift from non-thinking models to post-trained reasoning models capable of solving complex problems through thinking. However, whether such thinking mitigates hallucinations in multimodal perception and reasoning remains unclear. Self-reflective reasoning enhances robustness but introduces additional hallucinations, and subtle perceptual errors still result in incorrect or coincidentally correct answers. Existing benchmarks primarily target models that predate reasoning MLLMs, neglecting the internal thinking process and failing to measure the hallucinations that occur during thinking. To address these challenges, we introduce MM-THEBench, a comprehensive benchmark for assessing hallucinations in the intermediate chains of thought (CoTs) of reasoning MLLMs. MM-THEBench features a fine-grained taxonomy grounded in cognitive dimensions, diverse data with verified reasoning annotations, and a multi-level automated evaluation framework. Extensive experiments on mainstream reasoning MLLMs reveal insights into how thinking affects hallucination and reasoning capability across various multimodal tasks.
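To make the "multi-level automated evaluation" idea concrete, here is a minimal sketch of one plausible shape for it: per-step hallucination flags on a CoT are aggregated into chain-level rates for each taxonomy category. Everything here is a hypothetical illustration, not the paper's actual framework; `CoTStep`, `evaluate_cot`, and the four category names are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical categories for illustration only; the paper's taxonomy is
# grounded in cognitive dimensions and its actual labels may differ.
TAXONOMY = ["perception", "knowledge", "logic", "faithfulness"]

@dataclass
class CoTStep:
    text: str          # one intermediate reasoning step from the model's CoT
    labels: list[str]  # hallucination categories flagged for this step

def evaluate_cot(steps: list[CoTStep]) -> dict[str, float]:
    """Aggregate step-level hallucination flags into chain-level rates."""
    n = max(len(steps), 1)  # avoid division by zero on empty chains
    return {cat: sum(cat in s.labels for s in steps) / n for cat in TAXONOMY}

# Example: a three-step chain with one perceptual and one logical error.
chain = [
    CoTStep("The sign in the image reads 'EXIT'.", ["perception"]),
    CoTStep("Exit signs are commonly green or red.", []),
    CoTStep("Therefore the sign must be blue.", ["logic"]),
]
print(evaluate_cot(chain))
# -> {'perception': 0.33..., 'knowledge': 0.0, 'logic': 0.33..., 'faithfulness': 0.0}
```

In practice the step labels would come from an automated judge rather than hand annotation; the point of the sketch is only that scoring the chain, not just the final answer, is what lets such a benchmark separate sound reasoning from coincidentally correct answers.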
Problem

Research questions and friction points this paper is trying to address.

hallucination
multimodal large language models
reasoning
chain-of-thought
benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning MLLMs
hallucination evaluation
Chain-of-Thought (CoT)
multimodal benchmark
cognitive taxonomy