🤖 AI Summary
Current vision-action policies lack explicit spatial modeling capabilities, hindering reliable translation of visual plans into executable control in complex embodied environments. To address this, we propose Spatial Policy (SP), the first spatially aware modeling framework that jointly learns visual prediction and action execution through three key innovations: (1) spatially conditioned video generation for visual forecasting, (2) a spatial-layout-aware action prediction network, and (3) a two-stage feedback-driven replanning mechanism enabling co-optimization of vision-action learning and spatial reasoning. SP introduces a novel *spatial plan table*, a unified representation encoding both visual intent and action constraints, thereby significantly improving spatial relational understanding and online error correction. Evaluated on 11 challenging embodied tasks, SP achieves a mean success rate of 86.7%, a 33.0% average improvement over the strongest baseline, substantially enhancing the practicality of embodied models for robotic control.
📝 Abstract
Vision-centric hierarchical embodied models have demonstrated strong potential for long-horizon robotic control. However, existing methods lack spatial awareness, limiting their effectiveness in bridging visual plans to actionable control in complex environments. To address this problem, we propose Spatial Policy (SP), a unified spatially aware visuomotor framework for robotic manipulation built on explicit spatial modeling and reasoning. Specifically, we first design a spatially conditioned embodied video generation module that produces spatially guided predictions through a spatial plan table. Then, we propose a spatial action prediction module that infers executable actions coordinated with these predictions. Finally, we propose a spatial reasoning feedback policy that refines the spatial plan table via dual-stage replanning. Extensive experiments show that SP significantly outperforms state-of-the-art baselines, achieving a 33.0% average improvement over the best baseline. With an 86.7% average success rate across 11 diverse tasks, SP substantially enhances the practicality of embodied models for robotic control applications. Code and checkpoints are maintained at https://plantpotatoonmoon.github.io/SpatialPolicy/.
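To make the three-module pipeline concrete, the control loop implied by the abstract (video forecasting conditioned on a spatial plan table, action prediction from the forecast, and feedback-driven refinement of the table) can be sketched as follows. This is a minimal illustrative skeleton: every class, function, and numeric detail here is a hypothetical stand-in, not the authors' actual API or model.

```python
# Hypothetical sketch of the SP control loop described in the abstract.
# All names (SpatialPlanTable, predict_video, predict_action) are
# illustrative stand-ins; the real modules are learned networks.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SpatialPlanTable:
    """Unified representation encoding visual intent and action constraints."""
    entries: List[dict] = field(default_factory=list)

    def refine(self, feedback: float) -> None:
        # Dual-stage replanning: store execution feedback so that
        # subsequent predictions are conditioned on observed errors.
        self.entries.append({"feedback": feedback})


def predict_video(plan: SpatialPlanTable, obs: float) -> float:
    # Stand-in for spatially conditioned video generation:
    # forecast the next visual state from the plan and current observation.
    return obs + 0.1 * (1 + len(plan.entries))


def predict_action(frame: float) -> float:
    # Stand-in for spatial action prediction: map the forecast frame
    # to an executable low-level action.
    return frame * 0.5


def run_episode(steps: int = 3) -> List[float]:
    plan = SpatialPlanTable()
    obs = 0.0
    actions = []
    for _ in range(steps):
        frame = predict_video(plan, obs)   # (1) visual forecasting
        action = predict_action(frame)     # (2) action prediction
        obs = frame * 0.95                 # simulated imperfect execution
        error = abs(frame - obs)           # spatial reasoning feedback
        plan.refine(error)                 # (3) replan via the plan table
        actions.append(action)
    return actions
```

The closed loop is the key design choice suggested by the abstract: execution feedback flows back into the spatial plan table rather than only into the low-level controller, so both the visual forecast and the action prediction see the corrected plan on the next step.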