GraphCoT-VLA: A 3D Spatial-Aware Reasoning Vision-Language-Action Model for Robotic Manipulation with Ambiguous Instructions

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language-action (VLA) models struggle with ambiguous language instructions, generalize poorly to unseen environments, and are fundamentally limited by static 2D perception, lacking explicit modeling of 3D interactive dynamics. To address these limitations, we propose an end-to-end VLA framework featuring two core innovations: (1) a structured chain-of-thought reasoning module that jointly integrates high-level task planning, failure-driven feedback, and low-level action imagination; and (2) a real-time updatable 3D Pose-Object graph that explicitly encodes topological spatial relationships between the robot and objects in 3D space. Our method combines vision-language pretraining, graph neural networks, and a hybrid dropout-based inference strategy. Evaluated on real-robot manipulation tasks, the model achieves significant improvements in task success rate and response latency, demonstrating strong generalization across diverse instructions, robust adaptation to open-world environments, and resilience to perceptual and environmental disturbances.

📝 Abstract
Vision-language-action models have emerged as a crucial paradigm in robotic manipulation. However, existing VLA models exhibit notable limitations in handling ambiguous language instructions and unknown environmental states. Furthermore, their perception is largely constrained to static two-dimensional observations, lacking the capability to model three-dimensional interactions between the robot and its environment. To address these challenges, this paper proposes GraphCoT-VLA, an efficient end-to-end model. To enhance the model's ability to interpret ambiguous instructions and improve task planning, we design a structured Chain-of-Thought reasoning module that integrates high-level task understanding and planning, failed-task feedback, and low-level imaginative reasoning about future object positions and robot actions. Additionally, we construct a real-time updatable 3D Pose-Object graph, which captures the spatial configuration of robot joints and the topological relationships between objects in 3D space, enabling the model to better understand and manipulate their interactions. We further integrate a dropout hybrid reasoning strategy to achieve efficient control outputs. Experimental results across multiple real-world robotic tasks demonstrate that GraphCoT-VLA significantly outperforms existing methods in terms of task success rate and response speed, exhibiting strong generalization and robustness in open environments and under uncertain instructions.
Problem

Research questions and friction points this paper is trying to address.

Handling ambiguous language instructions in robotic manipulation
Modeling 3D interactions in dynamic environments
Improving task planning with spatial-aware reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured Chain-of-Thought reasoning module
Real-time updatable 3D Pose-Object graph
Dropout hybrid reasoning strategy
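
The paper does not publish the graph's concrete data layout, but the idea of a real-time updatable 3D Pose-Object graph can be sketched as follows: nodes for robot joints and scene objects carrying 3D positions, and edges storing pairwise relative displacements that are recomputed whenever a node's pose is updated. All names (`Node`, `PoseObjectGraph`, `upsert`) are hypothetical illustration, not the authors' API.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                 # "joint" (robot) or "object" (scene)
    pos: tuple                # (x, y, z) position in meters

@dataclass
class PoseObjectGraph:
    """Minimal sketch of a real-time updatable pose-object graph."""
    nodes: dict = field(default_factory=dict)   # name -> Node
    edges: dict = field(default_factory=dict)   # (a, b) -> displacement a->b

    def upsert(self, node: Node) -> None:
        """Insert or update a node, refreshing all edges that touch it."""
        self.nodes[node.name] = node
        for other in self.nodes.values():
            if other.name == node.name:
                continue
            d = tuple(b - a for a, b in zip(node.pos, other.pos))
            self.edges[(node.name, other.name)] = d
            self.edges[(other.name, node.name)] = tuple(-x for x in d)

    def distance(self, a: str, b: str) -> float:
        """Euclidean distance between two nodes."""
        return math.dist(self.nodes[a].pos, self.nodes[b].pos)

g = PoseObjectGraph()
g.upsert(Node("gripper", "joint", (0.0, 0.0, 0.5)))
g.upsert(Node("cup", "object", (0.3, 0.0, 0.1)))
print(round(g.distance("gripper", "cup"), 3))  # 0.5
```

In the actual model, such a graph would be consumed by a graph neural network; this sketch only illustrates the "topological spatial relationships updated in real time" aspect described in the abstract.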
Helong Huang
Noah’s Ark Lab, Huawei
Min Cen
University of Science and Technology of China
Kai Tan
Noah’s Ark Lab, Huawei
Xingyue Quan
Noah’s Ark Lab, Huawei
Guowei Huang
Noah’s Ark Lab, Huawei
Hong Zhang
School of Management, University of Science and Technology of China