PulseCheck457: A Diagnostic Benchmark for Comprehensive Spatial Reasoning of Large Multimodal Models

📅 2025-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks predominantly evaluate 2D visual understanding and lack systematic assessment of six-degree-of-freedom (6D) spatial reasoning. Method: We introduce PulseCheck457, the first synthetic diagnostic benchmark for 6D spatial reasoning, covering four core capabilities—multi-object recognition, 2D localization, 3D localization, and 3D orientation estimation—organized into seven question types and five difficulty levels. We propose a novel 6D reasoning evaluation framework featuring the Relative Performance Dropping Rate (RPDR) to quantify 3D reasoning decay, alongside unbiased attribute control, cross-difficulty modeling, and bias attribution analysis. Contribution/Results: Experiments reveal significant performance degradation on 6D tasks; RPDR identifies 3D localization and orientation estimation as critical bottlenecks; and bias patterns—particularly in 3D position and pose prediction—are consistent across synthetic and real-world images, exposing systemic limitations of current large multimodal models.
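The summary above does not give the exact formulation of RPDR. A minimal sketch, assuming it measures the relative accuracy lost when a model moves from a simpler task level to a more complex one (the function name and example numbers below are illustrative, not from the paper):

```python
def rpdr(acc_simple: float, acc_complex: float) -> float:
    """Relative Performance Dropping Rate (hypothetical form):
    the fraction of baseline accuracy lost when moving from a
    simpler task level to a more complex one."""
    if acc_simple <= 0:
        raise ValueError("baseline accuracy must be positive")
    return (acc_simple - acc_complex) / acc_simple

# Illustrative: accuracy falling from 0.80 on a 2D task
# to 0.48 on the corresponding 3D task gives RPDR = 0.40,
# i.e., 40% of the baseline performance is lost.
drop = rpdr(0.80, 0.48)
```

Under this reading, a higher RPDR on 3D localization and orientation questions than on 2D questions would flag those capabilities as the bottlenecks the results describe.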

📝 Abstract
Although large multimodal models (LMMs) have demonstrated remarkable capabilities in visual scene interpretation and reasoning, their capacity for complex and precise 3-dimensional spatial reasoning remains uncertain. Existing benchmarks focus predominantly on 2D spatial understanding and lack a framework to comprehensively evaluate 6D spatial reasoning across varying complexities. To address this limitation, we present PulseCheck457, a scalable and unbiased synthetic dataset designed with four key capabilities for spatial reasoning: multi-object recognition, 2D location, 3D location, and 3D orientation. We develop a cascading evaluation structure, constructing seven question types across five difficulty levels that range from basic single-object recognition to our newly proposed complex 6D spatial reasoning tasks. We evaluated various LMMs on PulseCheck457, observing a general decline in performance as task complexity increases, particularly in 3D reasoning and 6D spatial tasks. To quantify these challenges, we introduce the Relative Performance Dropping Rate (RPDR), highlighting key weaknesses in 3D reasoning capabilities. Leveraging the unbiased attribute design of our dataset, we also uncover prediction biases across different attributes, with similar patterns observed in real-world image settings.
Problem

Research questions and friction points this paper is trying to address.

Evaluate 6D spatial reasoning in LMMs
Develop scalable dataset for spatial tasks
Identify 3D reasoning weaknesses in models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unbiased synthetic dataset design
Cascading evaluation structure
Relative Performance Dropping Rate (RPDR)