🤖 AI Summary
Large vision-language models (VLMs) are increasingly deployed in spatially grounded applications, yet their geometric perception, particularly depth and height understanding, lacks rigorous evaluation. Method: We introduce GeoMeter, the first multi-dimensional benchmark dedicated to geometric reasoning, featuring controlled 2D and 3D scene generation and combining human annotations with diverse evaluation paradigms: multi-turn consistency QA, relative ranking, and counterfactual reasoning. Contribution/Results: Systematic evaluation of 18 state-of-the-art VLMs reveals a stark performance gap: average accuracy on depth and height reasoning is only 57.6%, substantially below accuracy on shape and size recognition, exposing both capability limitations and dataset biases. GeoMeter enables fine-grained, quantifiable assessment of VLMs' geometric perception, filling a fundamental gap in the evaluation of visual geometric understanding and providing both a standardized benchmark and a diagnostic framework for research on robust visual reasoning.
📝 Abstract
Geometric understanding - including depth and height perception - is fundamental to intelligence and crucial for navigating our environment. Despite the impressive capabilities of large Vision Language Models (VLMs), it remains unclear whether they possess the geometric understanding required for practical applications in visual perception. In this work, we evaluate the geometric understanding of these models, specifically their ability to perceive the depth and height of objects in an image. To this end, we introduce GeoMeter, a suite of benchmark datasets - encompassing 2D and 3D scenarios - that rigorously evaluates these aspects. Benchmarking 18 state-of-the-art VLMs, we find that although they excel at perceiving basic geometric properties such as shape and size, they consistently struggle with depth and height perception. Our analysis reveals that these challenges stem from shortcomings in their depth and height reasoning capabilities and from inherent biases. This study aims to pave the way for VLMs with enhanced geometric understanding by emphasizing depth and height perception as critical components for real-world applications.