🤖 AI Summary
To address the limited generalization and safety of robotic task execution in unseen complex environments, this paper proposes Chain-of-Affordance (CoA) reasoning: a structured, four-stage affordance sequence—object recognition, graspability, spatial placeability, and collision-free motion—that decouples embodied decision-making and precedes action generation in Vision-Language-Action (VLA) models. The paper introduces the CoA paradigm, combining multi-stage prompting, affordance-aware instruction tuning, and end-to-end policy optimization so that the affordance chain directly guides action prediction. Evaluated on multiple benchmarks, the approach significantly outperforms state-of-the-art methods including OpenVLA and Octo. Notably, it achieves substantial improvements in generalization to unseen object poses, free-space identification, and obstacle avoidance in novel environments—demonstrating enhanced robustness, safety, and zero-shot adaptability for real-world robotic deployment.
📝 Abstract
Robot foundation models, particularly Vision-Language-Action (VLA) models, have garnered significant attention for their ability to enhance robot policy learning, greatly improving robot generalization and robustness. OpenAI's recent model, o1, showcased impressive capabilities in solving complex problems by utilizing extensive reasoning chains. This prompts an important question: can robot models achieve better performance in multi-task, complex environments by reviewing prior observations and then providing task-specific reasoning to guide action prediction? In this paper, we introduce **Chain-of-Affordance (CoA)**, a novel approach to scaling robot models by incorporating reasoning in the form of sequential robot affordances to facilitate task completion. Specifically, we prompt the model to consider the following four types of affordances before taking action: a) object affordance - what object to manipulate and where it is; b) grasp affordance - the specific object part to grasp; c) spatial affordance - the optimal space to place the object; and d) movement affordance - the collision-free path for movement. By integrating this knowledge into the policy model, the robot gains essential context, allowing it to act with increased precision and robustness during inference. Our experiments demonstrate that CoA achieves superior performance to state-of-the-art robot foundation models such as OpenVLA and Octo. Additionally, CoA shows strong generalization to unseen object poses, identifies free space, and avoids obstacles in novel environments.
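The four-stage affordance chain described above can be sketched as a small data structure that is serialized into a reasoning prefix before action prediction. This is a minimal illustrative sketch, not the paper's actual API: the class name, field names, and prompt format are all assumptions made here for clarity.

```python
from dataclasses import dataclass

# Hypothetical sketch of the four affordance stages from the abstract.
# All names and the prompt layout are illustrative assumptions, not the
# paper's implementation.

@dataclass
class AffordanceChain:
    object_affordance: str    # a) what object to manipulate and where it is
    grasp_affordance: str     # b) the specific object part to grasp
    spatial_affordance: str   # c) the optimal space to place the object
    movement_affordance: str  # d) the collision-free path for movement

    def to_prompt(self) -> str:
        """Serialize the chain as a reasoning prefix preceding the action."""
        return (
            f"Object: {self.object_affordance}\n"
            f"Grasp: {self.grasp_affordance}\n"
            f"Placement: {self.spatial_affordance}\n"
            f"Path: {self.movement_affordance}\n"
            "Action:"
        )

# Example chain for a pick-and-place task.
chain = AffordanceChain(
    object_affordance="red mug on the left side of the table",
    grasp_affordance="handle",
    spatial_affordance="empty area next to the sink",
    movement_affordance="arc over the bowl to avoid collision",
)
print(chain.to_prompt())
```

In the CoA setup, a prefix like this would be produced by the VLA model itself during inference, so that each affordance conditions the next stage and, finally, the predicted action.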