🤖 AI Summary
Weak spatial reasoning in vision-language models (VLMs) and spatial inaccuracies arising from purely linguistic outputs hinder embodied intelligence. To address this, we propose a two-stage framework: (i) bidirectional spatial coordinate alignment, enabling precise coordinate-level mapping between visual and linguistic representations; and (ii) a chain-of-thought spatial grounding mechanism, explicitly anchoring sequential reasoning steps to physical space. Our work is the first to deeply integrate coordinate alignment with chain-of-thought reasoning, establishing a unified vision–language–action joint representation and an end-to-end embodied planning architecture. Evaluated on both simulated and real-world navigation and manipulation tasks, our method achieves a 23.6% improvement in localization accuracy and a 19.4% increase in task success rate, significantly outperforming state-of-the-art approaches.
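The two training stages described above can be sketched as data-construction recipes: stage (i) builds bi-directional pairs in which coordinates appear either in the input (coordinate understanding) or in the output (coordinate generation), and stage (ii) builds chain-of-thought targets whose reasoning ends in a grounded action point. The function names, prompt templates, and data shapes below are illustrative assumptions, not the paper's actual API:

```python
# Hypothetical sketch of the two-stage recipe summarized above.
# All names, prompt templates, and dict keys are assumptions for illustration.

def coordinate_alignment_examples(x, y, label):
    """Stage (i): bi-directional vision-language <-> coordinate pairs."""
    return [
        # Coordinate understanding: the coordinate appears in the input.
        {"prompt": f"What object is at ({x}, {y})?", "target": label},
        # Coordinate generation: the coordinate appears in the output.
        {"prompt": f"Where is the {label}?", "target": f"({x}, {y})"},
    ]

def cot_grounding_example(instruction, reasoning_steps, x, y):
    """Stage (ii): chain-of-thought reasoning anchored to an action point."""
    rationale = " ".join(reasoning_steps)
    return {
        "prompt": instruction,
        "target": f"{rationale} Therefore the action point is ({x}, {y}).",
    }

# Toy usage with made-up pixel coordinates.
pairs = coordinate_alignment_examples(120, 85, "mug")
cot = cot_grounding_example(
    "Place the mug on the shelf.",
    ["The mug sits at (120, 85).", "The shelf's free area is to its right."],
    200, 60,
)
```

In this sketch, fine-tuning on both directions of the stage-(i) pairs is what the summary calls coordinate-level mapping, and the stage-(ii) targets keep the model's free-form reasoning while forcing the final answer into physical coordinates rather than pure language.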
📝 Abstract
Spatial reasoning is an essential problem in embodied AI research. Efforts to enhance spatial reasoning through supplementary spatial data and fine-tuning have proven ineffective on complex embodied tasks, largely because of their dependence on language-based outputs. While some approaches introduce a point-based action space to mitigate this issue, they fall short on more intricate tasks in complex environments because they fail to fully exploit the inherent thinking and reasoning capabilities that are fundamental strengths of Vision-Language Models (VLMs). To address these limitations, we propose a novel approach named SpatialCoT, specifically designed to bolster the spatial reasoning capabilities of VLMs. Our approach comprises two stages: spatial coordinate bi-directional alignment, which aligns vision-language inputs with spatial coordinates, and chain-of-thought spatial grounding, which harnesses the reasoning capabilities of language models for advanced spatial reasoning. We evaluate SpatialCoT on challenging navigation and manipulation tasks, in both simulated and real-world settings. Experimental results demonstrate that our method significantly outperforms previous state-of-the-art approaches on both tasks.