Visual Room 2.0: Seeing is Not Understanding for MLLMs

📅 2025-11-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work investigates whether multimodal large language models (MLLMs) truly “understand what they see,” proposing the “Visual Room” argument and introducing the first systematic benchmark for evaluating perception–cognition alignment, comprising three hierarchical levels, 17 tasks, and 350 samples. Methodologically, the authors design a six-stage progressive multimodal question-answering evaluation suite that integrates attribute recognition, scene understanding, textual entailment, and causal/social reasoning, rigorously disentangling low-level perceptual capabilities from high-level cognitive reasoning. Key contributions include: (1) establishing a falsifiable theoretical framework for the hypothesis “seeing ≠ understanding”; (2) empirically demonstrating a pervasive perception–cognition gap across MLLMs, with perception outperforming cognition by 8.0% on average; and (3) revealing that cognitive capability improves consistently with model scale, whereas perceptual performance shows no stable dependence on parameter count, uncovering a fundamental decoupling between perception and cognition in MLLMs.

šŸ“ Abstract
Can multi-modal large language models (MLLMs) truly understand what they can see? Extending Searle's Chinese Room into the multi-modal domain, this paper proposes the Visual Room argument: MLLMs may describe every visual detail precisely yet fail to comprehend the underlying emotions and intentions, namely seeing is not understanding. Building on this, we introduce Visual Room 2.0, a hierarchical benchmark for evaluating perception–cognition alignment of MLLMs. We model human perceptive and cognitive processes across three levels: low, middle, and high, covering 17 representative tasks. The perception component ranges from attribute recognition to scene understanding, while the cognition component extends from textual entailment to causal and social reasoning. The dataset contains 350 multi-modal samples, each with six progressive questions (2,100 in total) spanning perception to cognition. Evaluating 10 state-of-the-art (SoTA) MLLMs, we highlight three key findings: (1) MLLMs exhibit stronger perceptual competence than cognitive ability (8.0%↑); (2) cognition does not appear to be causally dependent on perception-based reasoning; and (3) cognition scales with model size, but perception does not consistently improve with larger variants. This work operationalizes Seeing ≠ Understanding as a testable hypothesis, offering a new paradigm from perceptual processing to cognitive reasoning in MLLMs. Our dataset is available at https://huggingface.co/datasets/LHK2003/PCBench.
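To make the benchmark's headline metric concrete, the sketch below scores a set of samples, each carrying six progressive 0/1 correctness flags, and computes the perception–cognition gap. The 3/3 split between perception and cognition questions, the function name, and the data layout are illustrative assumptions, not the released PCBench schema or the authors' evaluation code.

```python
# Illustrative sketch only: aggregating per-question correctness into a
# perception-cognition gap. Assumption (not from the paper's released code):
# questions 1-3 of each sample probe perception, questions 4-6 probe cognition.

def perception_cognition_gap(results):
    """results: list of per-sample lists of six 0/1 correctness flags.

    Returns (perception accuracy, cognition accuracy, gap)."""
    perception = [q for sample in results for q in sample[:3]]
    cognition = [q for sample in results for q in sample[3:]]
    perc_acc = sum(perception) / len(perception)
    cog_acc = sum(cognition) / len(cognition)
    return perc_acc, cog_acc, perc_acc - cog_acc

# Toy run with two hypothetical samples:
perc, cog, gap = perception_cognition_gap([[1, 1, 1, 0, 1, 0],
                                           [1, 0, 1, 1, 0, 0]])
```

A positive gap corresponds to the paper's headline finding that perception scores exceed cognition scores on average across the evaluated MLLMs.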
Problem

Research questions and friction points this paper is trying to address.

Evaluating whether MLLMs truly understand visual content beyond mere description
Assessing perception–cognition alignment across 17 tasks, from basic attribute recognition to complex social reasoning
Testing whether seeing equals understanding via a hierarchical multi-modal benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes a hierarchical benchmark for perception–cognition alignment
Models human perceptive and cognitive processes across three levels with 17 tasks
Evaluates 10 SoTA MLLMs on 2,100 progressive multi-modal questions