🤖 AI Summary
To enable general-purpose robots in open-world settings to follow complex, multi-step, multi-constraint instructions and adapt to real-time verbal feedback (e.g., "That's not trash"), this paper proposes a hierarchical Vision-Language-Action (VLA) model. The architecture decouples high-level intent reasoning from low-level action control, combining joint encoding of instructions and feedback, embodied action-policy distillation, and a cross-platform deployment framework to support continual, context-aware instruction following on physical robots. Experiments on single-arm, dual-arm, and mobile dual-arm platforms demonstrate tasks including cleaning messy tables, assembling vegetarian sandwiches, and grocery shopping. Results show substantial improvements in task generalization and interactive robustness over end-to-end baselines, confirming the efficacy of hierarchical intent–action decomposition for open-world robotic autonomy.
📝 Abstract
Generalist robots that can perform a range of different tasks in open-world settings must be able to not only reason about the steps needed to accomplish their goals, but also process complex instructions, prompts, and even feedback during task execution. Intricate instructions (e.g., "Could you make me a vegetarian sandwich?" or "I don't like that one") require not just the ability to physically perform the individual steps, but the ability to situate complex commands and feedback in the physical world. In this work, we describe a system that uses vision-language models in a hierarchical structure, first reasoning over complex prompts and user feedback to deduce the most appropriate next step to fulfill the task, and then performing that step with low-level actions. In contrast to direct instruction following methods that can fulfill simple commands ("pick up the cup"), our system can reason through complex prompts and incorporate situated feedback during task execution ("that's not trash"). We evaluate our system across three robotic platforms, including single-arm, dual-arm, and dual-arm mobile robots, demonstrating its ability to handle tasks such as cleaning messy tables, making sandwiches, and grocery shopping.
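The following is a minimal sketch of the hierarchical control loop the abstract describes: a high-level vision-language model turns the user's prompt and any accumulated verbal feedback into the next atomic step, and a low-level policy grounds that step in actions. All names here (HighLevelVLM, LowLevelPolicy, control_loop, and the stub heuristics inside them) are hypothetical illustrations of the two-level interface, not the paper's actual API.

```python
# Hierarchical intent-action decomposition, sketched with stub components.
# The high level reasons over the full prompt plus feedback; the low level
# only ever sees a simple atomic command ("pick up the cup").

class HighLevelVLM:
    """Reasons over the prompt, observation, and accumulated user feedback
    to select the most appropriate next step, phrased as a simple command."""

    def next_step(self, observation, prompt, feedback):
        # A real system would query a vision-language model here; this stub
        # only shows the contract: complex prompt in, atomic step out.
        if "that's not trash" in feedback:
            return "place the object back on the table"
        return "pick up the nearest object"


class LowLevelPolicy:
    """Grounds an atomic language command in robot actions."""

    def act(self, observation, command):
        # Stub for a vision-language-action policy producing motor commands.
        return f"<motor actions for: {command}>"


def control_loop(prompt, observations, feedback_stream):
    """Run the two-level loop over a stream of observations and feedback."""
    vlm, policy = HighLevelVLM(), LowLevelPolicy()
    feedback = []
    for obs, new_feedback in zip(observations, feedback_stream):
        # Verbal corrections are folded into high-level reasoning rather
        # than handled by the low-level controller.
        feedback.extend(new_feedback)
        step = vlm.next_step(obs, prompt, feedback)
        yield policy.act(obs, step)


# Toy run: the second observation arrives with a correction from the user.
for action in control_loop(
    prompt="clean the messy table",
    observations=["frame_0", "frame_1"],
    feedback_stream=[[], ["that's not trash"]],
):
    print(action)
```

The design point this illustrates is the one the paper argues for: because the interface between the levels is plain language, situated feedback can redirect the high-level plan mid-task without retraining or modifying the low-level action policy.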