🤖 AI Summary
This work addresses the limited spatial reasoning and visual alignment capabilities of vision-language models (VLMs). To this end, we introduce iVISPAR, the first interactive multimodal spatial reasoning benchmark, built on a sliding tile puzzle variant that requires multi-step logical planning and spatial perception under 2D, 3D, and text-based inputs. The benchmark pairs an evaluation framework that provides action feedback with human baselines and optimal-path solutions, enabling the first systematic quantification of the cognitive gaps in VLMs' visual alignment and complex spatial planning. Experimental results show that state-of-the-art VLMs achieve only moderate performance on simple tasks and fall significantly short of human-level performance on complex configurations. Moreover, models perform better with 2D visual input than with 3D or text-based inputs, confirming that visual alignment remains a fundamental bottleneck in current VLM architectures.
📝 Abstract
Vision-Language Models (VLMs) are known to struggle with spatial reasoning and visual alignment. To help overcome these limitations, we introduce iVISPAR, an interactive multi-modal benchmark designed to evaluate the spatial reasoning capabilities of VLMs acting as agents. iVISPAR is based on a variant of the sliding tile puzzle, a classic problem that demands logical planning, spatial awareness, and multi-step reasoning. The benchmark supports visual 2D, 3D, and text-based input modalities, enabling comprehensive assessments of VLMs' planning and reasoning skills. We evaluate a broad suite of state-of-the-art open-source and closed-source VLMs, comparing their performance while also providing optimal path solutions and a human baseline to assess the task's complexity and feasibility for humans. Results indicate that while some VLMs perform well on simple spatial tasks, they encounter difficulties with more complex configurations and problem properties. Notably, while VLMs generally perform better with 2D vision than with 3D or text-based representations, they consistently fall short of human performance, illustrating the persistent challenge of visual alignment. This highlights critical gaps in current VLM capabilities, underscoring how far they remain from human-level cognition.
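To make the interactive setup concrete, below is a minimal Python sketch of the kind of action-feedback loop the benchmark describes: an agent repeatedly observes a sliding tile board, issues a move, and receives feedback until the puzzle is solved or a step budget runs out. The names (`SlidingPuzzleEnv`, the `up`/`down`/`left`/`right` actions) and the classic 15-puzzle formulation are illustrative assumptions, not the actual iVISPAR API or its specific puzzle variant.

```python
# Hypothetical sketch of an interactive sliding-tile environment; not the iVISPAR API.
from dataclasses import dataclass, field
import random

# Action -> (row offset, column offset) relative to the blank cell.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

@dataclass
class SlidingPuzzleEnv:
    size: int = 4                      # 4x4 board: 15 numbered tiles plus one blank
    board: list = field(default_factory=list)

    def reset(self, shuffle_steps: int = 50) -> str:
        """Start from the solved board and shuffle it with random legal moves."""
        self.board = list(range(1, self.size ** 2)) + [0]   # 0 marks the blank
        for _ in range(shuffle_steps):
            self.step(random.choice(list(MOVES)))
        return self.render()

    def _blank(self) -> tuple:
        return divmod(self.board.index(0), self.size)

    def step(self, action: str) -> tuple:
        """Slide the tile on the given side of the blank into the blank.

        Returns (observation, solved, feedback); illegal moves leave the board unchanged.
        """
        r, c = self._blank()
        dr, dc = MOVES[action]
        tr, tc = r + dr, c + dc        # position of the tile that would slide
        if not (0 <= tr < self.size and 0 <= tc < self.size):
            return self.render(), self.is_solved(), "illegal move"
        bi, ti = r * self.size + c, tr * self.size + tc
        self.board[bi], self.board[ti] = self.board[ti], self.board[bi]
        return self.render(), self.is_solved(), "ok"

    def is_solved(self) -> bool:
        return self.board == list(range(1, self.size ** 2)) + [0]

    def render(self) -> str:
        """Text observation; the benchmark additionally supports 2D and 3D visual renderings."""
        rows = [self.board[i:i + self.size] for i in range(0, len(self.board), self.size)]
        return "\n".join(" ".join(f"{v:2d}" if v else " _" for v in row) for row in rows)

# Interaction loop: in the benchmark, a VLM agent would choose each action from the
# current observation; a random policy stands in for the model here.
env = SlidingPuzzleEnv()
obs = env.reset()
for _ in range(100):
    action = random.choice(list(MOVES))
    obs, solved, feedback = env.step(action)
    if solved:
        break
```

In this framing, the optimal-path baseline corresponds to the shortest move sequence from the shuffled state to the goal, and the human baseline comes from people solving the same instances through the same observation-action interface.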