🤖 AI Summary
Existing spatial reasoning methods rely on unstructured representations (e.g., point clouds, voxels) with implicit coordinate encoding, limiting their capacity to model higher-order scene structure and thus impairing spatial reasoning performance. To address this, we propose a multimodal framework inspired by the two types of human visual fields: target-affinity tokens emulate central vision by directing attention toward query-relevant objects, while an allocentric grid emulates peripheral vision by explicitly encoding the global spatial layout. By integrating object-level attention with grid-based spatial encoding, our method jointly models fine-grained local details and holistic contextual structure, enabling structured, context-aware 3D scene understanding. Evaluated on multiple 3D scene understanding benchmarks, our approach achieves state-of-the-art performance, significantly advancing spatial relation reasoning and complex layout comprehension.
📝 Abstract
We present the central-peripheral vision-inspired framework (CVP), a simple yet effective multimodal model for spatial reasoning that draws inspiration from the two types of human visual fields -- central vision and peripheral vision. Existing approaches primarily rely on unstructured representations, such as point clouds, voxels, or patch features, and inject scene context implicitly via coordinate embeddings. However, this often results in limited spatial reasoning capabilities due to the lack of explicit, high-level structural understanding. To address this limitation, we introduce two complementary components into a Large Multimodal Model-based architecture: a target-affinity token, analogous to central vision, which guides the model's attention toward query-relevant objects; and an allocentric grid, akin to peripheral vision, which captures global scene context and spatial arrangements. These components work in tandem to enable structured, context-aware understanding of complex 3D environments. Experiments show that CVP achieves state-of-the-art performance across a range of 3D scene understanding benchmarks.
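To make the two components concrete, here is a minimal, hypothetical sketch of what they could look like in isolation. This is not the paper's implementation -- all function names, shapes, and the choice of a 2D occupancy grid and dot-product affinity are illustrative assumptions; the actual CVP components are learned inside a Large Multimodal Model.

```python
import numpy as np

def allocentric_grid(centers, bounds, resolution=8):
    """Hypothetical stand-in for the allocentric grid ("peripheral vision"):
    rasterize object centers into a fixed, scene-aligned occupancy grid
    that explicitly encodes the global spatial layout."""
    lo, hi = bounds
    grid = np.zeros((resolution, resolution), dtype=np.float32)
    for x, y in centers:
        # Clamp each center into a grid cell over the scene extent.
        i = min(int((x - lo) / (hi - lo) * resolution), resolution - 1)
        j = min(int((y - lo) / (hi - lo) * resolution), resolution - 1)
        grid[i, j] = 1.0
    return grid

def target_affinity(query_emb, object_embs):
    """Hypothetical stand-in for the target-affinity token ("central vision"):
    softmax-normalized similarity between a query embedding and per-object
    embeddings, highlighting query-relevant objects."""
    scores = object_embs @ query_emb
    scores = scores - scores.max()          # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

if __name__ == "__main__":
    # Two objects in opposite corners of a unit scene.
    grid = allocentric_grid([(0.1, 0.1), (0.9, 0.9)], bounds=(0.0, 1.0),
                            resolution=4)
    print(grid)
    # A query embedding closer to the first object's embedding gets
    # the higher affinity weight.
    w = target_affinity(np.array([1.0, 0.0]),
                        np.array([[0.9, 0.1], [0.1, 0.9]]))
    print(w)
```

In a full model, the grid would be tokenized and fed to the LMM alongside the affinity-weighted object features, so the model sees both the holistic layout and the query-focused objects at once.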