🤖 AI Summary
Existing multimodal benchmarks inadequately distinguish perceptual hallucinations from reasoning hallucinations in Multimodal Large Language Models (MLLMs), hindering precise diagnosis of multimodal reasoning failures. To address this, we introduce MIRAGE, the first benchmark explicitly designed to isolate reasoning hallucinations, built by rigorously ensuring correct image perception while deliberately injecting only logical errors. We propose multi-granularity evaluation metrics (accuracy, factuality, and an LLM hallucination score) and, for the first time, systematically uncover correlations among model scale, training stage, and hallucination type. Furthermore, we introduce curriculum-based reinforcement fine-tuning and collaborative hint inference for reasoning. Experiments show that our methods significantly reduce logical hallucinations in base MLLMs and establish the first reproducible baseline on MIRAGE, while also revealing that current MLLMs exhibit no substantial improvement in spatial relational reasoning.
📝 Abstract
Multimodal hallucination restricts the correctness of multimodal large language models (MLLMs). However, such hallucinations arise from diverse causes, and existing benchmarks fail to adequately distinguish perception-induced hallucinations from reasoning-induced hallucinations. This shortcoming hinders the precise diagnosis of multimodal reasoning failures within MLLMs. To address this, we propose the MIRAGE benchmark, which isolates reasoning hallucinations by constructing questions where input images are correctly perceived by MLLMs yet reasoning errors persist. MIRAGE introduces multi-granular evaluation metrics, namely accuracy, factuality, and an LLM hallucination score, for hallucination quantification. Our analysis reveals that (1) model scale, data scale, and training stage significantly affect the degree of logical, fabrication, and factual hallucinations; (2) current MLLMs show no effective improvement on spatial hallucinations caused by misinterpreted spatial relationships, indicating their limited visual reasoning capabilities; and (3) question types correlate with distinct hallucination patterns, highlighting targeted challenges and potential mitigation strategies. To address these challenges, we propose {method}, which combines curriculum reinforcement fine-tuning, encouraging models to generate logic-consistent reasoning chains through a stepwise reduction of learning difficulty, with collaborative hint inference, which reduces reasoning complexity. {method} establishes a baseline on MIRAGE and reduces logical hallucinations in the original base models.