🤖 AI Summary
Current vision foundation models excel at semantic understanding but lack robust reasoning about spatial relationships between objects, limiting their applicability to tasks such as embodied intelligence. To address this gap, this work proposes SpaRRTa, a procedurally generated synthetic benchmark that, for the first time, targets the recognition of relative object positions, a task more aligned with human spatial cognition than metric-accurate 3D prediction. By combining controllable layouts with photorealistic rendering, SpaRRTa enables systematic evaluation of spatial reasoning in mainstream vision foundation models. The assessment reveals significant performance disparities among these models, offering empirical insights into their spatial perception mechanisms and informing future improvements in this critical capability.
📝 Abstract
Visual Foundation Models (VFMs), such as DINO and CLIP, excel at semantic understanding of images but exhibit limited spatial reasoning, restricting their applicability to embodied systems. As a result, recent work incorporates 3D tasks (such as depth estimation) into VFM training. However, VFM performance remains inconsistent across other spatial tasks, raising the question of whether these models truly possess spatial awareness or merely overfit to specific 3D objectives. To address this question, we introduce the Spatial Relation Recognition Task (SpaRRTa) benchmark, which evaluates the ability of VFMs to identify the relative positions of objects in an image. Unlike traditional 3D objectives that demand precise metric prediction (e.g., surface normal estimation), SpaRRTa probes a fundamental capability underpinning more advanced forms of human-like spatial understanding. SpaRRTa generates an arbitrary number of photorealistic images with diverse scenes and fully controllable object arrangements, along with freely accessible spatial annotations. Evaluating a range of state-of-the-art VFMs, we reveal significant disparities among their spatial reasoning abilities. Through our analysis, we provide insights into the mechanisms that support or hinder spatial awareness in modern VFMs. We hope that SpaRRTa will serve as a useful tool for guiding the development of future spatially aware visual models.