🤗 AI Summary
This work investigates whether multimodal large language models (MLLMs) truly "understand what they see," proposing the "Visual Room" argument and introducing the first systematic benchmark for evaluating perception-cognition alignment, comprising three hierarchical levels, 17 tasks, and 350 samples. Methodologically, we design a six-stage progressive multimodal question-answering evaluation suite that integrates attribute recognition, scene understanding, textual entailment, and causal/social reasoning, rigorously disentangling low-level perceptual capabilities from high-level cognitive reasoning. Key contributions include: (1) establishing a falsifiable theoretical framework for the hypothesis "seeing ≠ understanding"; (2) empirically demonstrating a pervasive perception-cognition gap across MLLMs, with perception outperforming cognition by an average of +8.0%; (3) revealing that cognitive capability consistently improves with model scale, whereas perceptual performance lacks stable parameter-scale dependence, uncovering a fundamental decoupling mechanism between perception and cognition in MLLMs.
📄 Abstract
Can multi-modal large language models (MLLMs) truly understand what they can see? Extending Searle's Chinese Room into the multi-modal domain, this paper proposes the Visual Room argument: MLLMs may describe every visual detail precisely yet fail to comprehend the underlying emotions and intentions; that is, seeing is not understanding. Building on this, we introduce *Visual Room* 2.0, a hierarchical benchmark for evaluating perception-cognition alignment of MLLMs. We model human perceptive and cognitive processes across three levels: low, middle, and high, covering 17 representative tasks. The perception component ranges from attribute recognition to scene understanding, while the cognition component extends from textual entailment to causal and social reasoning. The dataset contains 350 multi-modal samples, each with six progressive questions (2,100 in total) spanning perception to cognition. Evaluating 10 state-of-the-art (SoTA) MLLMs, we highlight three key findings: (1) MLLMs exhibit stronger perceptual competence than cognitive ability (8.0%$\uparrow$); (2) cognition appears not causally dependent on perception-based reasoning; and (3) cognition scales with model size, but perception does not consistently improve with larger variants. This work operationalizes Seeing $\neq$ Understanding as a testable hypothesis, offering a new paradigm from perceptual processing to cognitive reasoning in MLLMs. Our dataset is available at https://huggingface.co/datasets/LHK2003/PCBench.
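To make the headline metric concrete, the sketch below shows one way the perception-cognition gap could be aggregated from per-question scores, given that each sample carries six progressive questions spanning perception to cognition. The question identifiers, the split of the six questions into a perception half and a cognition half, and the binary scoring scheme are illustrative assumptions, not the paper's actual evaluation code.

```python
# Hedged sketch of perception-cognition gap aggregation.
# Assumption: the first three progressive questions of each sample probe
# perception and the last three probe cognition; scores are 0/1 correctness.
from statistics import mean

PERCEPTION_QS = ["q1", "q2", "q3"]  # assumed perception-level questions
COGNITION_QS = ["q4", "q5", "q6"]   # assumed cognition-level questions


def component_accuracy(results, question_ids):
    """Mean accuracy over the given question ids across all samples."""
    scores = [sample[q] for sample in results for q in question_ids]
    return mean(scores)


def perception_cognition_gap(results):
    """Positive value means perception outperforms cognition."""
    perception = component_accuracy(results, PERCEPTION_QS)
    cognition = component_accuracy(results, COGNITION_QS)
    return perception - cognition


# Toy run: a model that perceives well but reasons less reliably.
results = [
    {"q1": 1, "q2": 1, "q3": 1, "q4": 0, "q5": 1, "q6": 0},
    {"q1": 1, "q2": 0, "q3": 1, "q4": 0, "q5": 0, "q6": 1},
]
gap = perception_cognition_gap(results)  # perception 5/6, cognition 2/6
```

A positive gap averaged over models would correspond to the reported finding that perception outperforms cognition by about 8.0 percentage points.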