🤖 AI Summary
This study investigates the alignment between multimodal large language models (MLLMs) and human cognitive capabilities across four reasoning domains—visual, definitional, analogical, and logical.
Method: We introduce Human-Aligned Bench, a fine-grained human–machine alignment benchmark for multimodal reasoning, comprising 9,794 bilingual (Chinese–English) items spanning both multimodal and text-only reasoning tasks. Each item is annotated with human success rates and with the incorrect options humans most commonly choose. Our methodology integrates multimodal data curation, bilingual item construction, human cognitive behavior analysis, and a cross-model normalized evaluation framework.
Contribution/Results: Experiments reveal that state-of-the-art MLLMs underperform humans by 32.7% on average in analogical and logical reasoning, and that their error patterns deviate markedly from human cognitive regularities. By moving beyond black-box evaluation, Human-Aligned Bench establishes an interpretable benchmark and a methodological foundation for cognitive alignment modeling and explainable AI assessment.
📝 Abstract
The goal of Artificial General Intelligence (AGI) is to imitate humans and ultimately surpass them. Models such as OpenAI's o1 and o3 and DeepSeek's R1 have demonstrated that large language models (LLMs) with human-like reasoning capabilities exhibit exceptional performance, and such capabilities are gradually being integrated into multimodal large language models (MLLMs). However, whether these models handle reasoning tasks with human-comparable ability remains unclear. In this paper, we propose Human-Aligned Bench, a benchmark for fine-grained alignment of multimodal reasoning with human performance. Specifically, we collected 9,794 questions that rely solely on contextual reasoning, including bilingual (Chinese and English) multimodal questions and pure text-based questions, encompassing four question types: visual reasoning, definition judgment, analogical reasoning, and logical judgment. More importantly, each question is accompanied by its human success rate and the incorrect options humans are most prone to choosing. Extensive experiments on Human-Aligned Bench reveal notable gaps between the multimodal reasoning performance of current MLLMs and that of humans. The findings on our benchmark provide insights for the development of next-generation models.
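The per-item annotations described above (a human success rate plus the distractor humans most often pick) suggest a simple alignment check: compare model accuracy to the mean human success rate, and measure how often the model's errors land on the same distractors humans favor. The sketch below illustrates this idea; the item schema, field names, and toy data are all hypothetical, not the benchmark's released format.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One benchmark question (hypothetical schema for illustration)."""
    qid: str
    answer: str                 # correct option label, e.g. "B"
    human_success_rate: float   # fraction of humans answering correctly
    common_wrong_option: str    # option humans most often choose when wrong

def alignment_report(items, model_answers):
    """Compare model accuracy to human success rates, and check whether
    the model's errors fall on the distractors humans also favor."""
    correct = 0
    errors = 0
    human_like_errors = 0
    human_rate_sum = 0.0
    for item in items:
        pred = model_answers[item.qid]
        human_rate_sum += item.human_success_rate
        if pred == item.answer:
            correct += 1
        else:
            errors += 1
            if pred == item.common_wrong_option:
                human_like_errors += 1
    n = len(items)
    return {
        "model_accuracy": correct / n,
        "mean_human_success": human_rate_sum / n,
        # Of the model's mistakes, the share matching the human-preferred distractor.
        "human_like_error_rate": human_like_errors / errors if errors else 0.0,
    }

# Toy data, invented purely for illustration.
items = [
    Item("q1", "B", 0.82, "C"),
    Item("q2", "A", 0.55, "D"),
    Item("q3", "C", 0.31, "B"),
]
preds = {"q1": "B", "q2": "D", "q3": "A"}
report = alignment_report(items, preds)
print(report)
```

A high `human_like_error_rate` would indicate that the model fails in human-like ways, while a low one would signal the kind of deviation from human error patterns the paper reports.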