Improving Vision-Language-Action Models via Chain-of-Affordance

📅 2024-12-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited generalization and safety of robotic task execution in unseen, complex environments, this paper proposes Chain-of-Affordance (CoA) reasoning: a structured four-stage affordance sequence (object recognition, graspability, spatial placeability, and collision-free motion) that decouples embodied decision-making and precedes action generation in Vision-Language-Action (VLA) models. The authors introduce the CoA paradigm, combining multi-stage prompt engineering, affordance-aware instruction tuning, and end-to-end policy optimization to enable affordance-chain-driven reasoning. Evaluated on multiple benchmarks, the approach significantly outperforms state-of-the-art methods including OpenVLA and Octo. Notably, it achieves substantial improvements in generalization to unseen object poses, free-space identification, and obstacle avoidance in novel environments, demonstrating enhanced robustness, safety, and zero-shot adaptability for real-world robotic deployment.

📝 Abstract
Robot foundation models, particularly Vision-Language-Action (VLA) models, have garnered significant attention for their ability to enhance robot policy learning, greatly improving robot generalization and robustness. OpenAI's recent model, o1, showcased impressive capabilities in solving complex problems by utilizing extensive reasoning chains. This prompts an important question: can robot models achieve better performance in multi-task, complex environments by reviewing prior observations and then providing task-specific reasoning to guide action prediction? In this paper, we introduce Chain-of-Affordance (CoA), a novel approach to scaling robot models by incorporating reasoning in the format of sequential robot affordances to facilitate task completion. Specifically, we prompt the model to consider the following four types of affordances before taking action: a) object affordance - what object to manipulate and where it is; b) grasp affordance - the specific object part to grasp; c) spatial affordance - the optimal space to place the object; and d) movement affordance - the collision-free path for movement. By integrating this knowledge into the policy model, the robot gains essential context, allowing it to act with increased precision and robustness during inference. Our experiments demonstrate that CoA achieves superior performance compared to state-of-the-art robot foundation models, such as OpenVLA and Octo. Additionally, CoA shows strong generalization to unseen object poses, identifies free space, and avoids obstacles in novel environments.
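The four-affordance chain described in the abstract can be pictured as a structured reasoning prefix that the policy conditions on before emitting actions. The sketch below is purely illustrative; the class and function names are hypothetical and not the paper's actual API.

```python
from dataclasses import dataclass


@dataclass
class AffordanceChain:
    """Illustrative container for the four affordances CoA reasons over
    before action prediction (names are hypothetical, not the paper's API)."""
    object_affordance: str    # a) what object to manipulate and where it is
    grasp_affordance: str     # b) the specific object part to grasp
    spatial_affordance: str   # c) the optimal space to place the object
    movement_affordance: str  # d) the collision-free path for movement


def to_reasoning_prefix(chain: AffordanceChain) -> str:
    """Serialize the affordance chain into a text prefix that a VLA
    policy could condition on before generating low-level actions."""
    return (
        f"Object: {chain.object_affordance}\n"
        f"Grasp: {chain.grasp_affordance}\n"
        f"Place: {chain.spatial_affordance}\n"
        f"Path: {chain.movement_affordance}"
    )


# Hypothetical pick-and-place example.
chain = AffordanceChain(
    object_affordance="red mug on the left side of the table",
    grasp_affordance="mug handle",
    spatial_affordance="empty area next to the sink",
    movement_affordance="arc over the bowl to avoid collision",
)
print(to_reasoning_prefix(chain))
```

The point of the ordering is that each stage narrows the next: knowing which object and where constrains the grasp, the grasp and target location constrain placement, and all three constrain a collision-free path.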
Problem

Research questions and friction points this paper is trying to address.

Robotics
Complex Environments
Obstacle Avoidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Action Chain Reasoning
Operational Accuracy
Unknown Environment Navigation
Jinming Li
Shanghai University
Embodied Intelligence, Robotics
Yichen Zhu
Midea Group
Zhibin Tang
Midea Group
Junjie Wen
East China Normal University
Minjie Zhu
East China Normal University
MLLM, Robotics
Xiaoyu Liu
Shanghai University
Chengmeng Li
Shanghai University
Ran Cheng
Midea Group
Yaxin Peng
Midea Group
Feifei Feng
Midea Group