🤖 AI Summary
Existing Vision-Language-Action (VLA) models rely on hand-crafted, low-level action vocabularies (e.g., end-effector poses), rendering them inadequate for the agile Whole-Body Control (WBC) required by humanoid robots.
Method: We introduce the first sim-to-real-ready, closed-loop VLA benchmark explicitly designed for WBC, encompassing 150+ tasks across 10 categories. We propose LeVERB, a hierarchical latent instruction-following framework that tightly couples vision-language policies with reinforcement-learning-based whole-body controllers: rather than tokenizing actions explicitly, it learns executable, kinematically grounded "verb" representations from synthetic motion demonstrations.
Contribution/Results: LeVERB achieves an 80% zero-shot success rate on simple visual navigation tasks and an overall task success rate of 58.5%, outperforming a naive hierarchical VLA baseline by 7.8×. This establishes a foundational benchmark and architecture for scalable, real-world WBC in humanoid robotics.
📝 Abstract
Vision-language-action (VLA) models have demonstrated strong semantic understanding and zero-shot generalization, yet most existing systems assume an accurate low-level controller with a hand-crafted action "vocabulary" such as end-effector pose or root velocity. This assumption confines prior work to quasi-static tasks and precludes the agile, whole-body behaviors required by humanoid whole-body control (WBC) tasks. To address this gap in the literature, we start by introducing the first sim-to-real-ready, vision-language, closed-loop benchmark for humanoid WBC, comprising over 150 tasks from 10 categories. We then propose LeVERB: Latent Vision-Language-Encoded Robot Behavior, a hierarchical latent instruction-following framework for humanoid vision-language WBC, the first of its kind. At the top level, a vision-language policy learns a latent action vocabulary from synthetically rendered kinematic demonstrations; at the low level, a reinforcement-learned WBC policy consumes these latent verbs to generate dynamics-level commands. On our benchmark, LeVERB zero-shot attains an 80% success rate on simple visual navigation tasks and a 58.5% success rate overall, outperforming a naive hierarchical whole-body VLA implementation by 7.8 times.
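To make the two-level structure concrete, here is a minimal sketch of how a latent-verb hierarchy can be wired up. This is not the authors' implementation: all module names, layer sizes, and dimensions (`VisionLanguagePolicy`, `latent_dim=32`, `action_dim=29`, the 10:1 high-level/low-level step ratio) are illustrative assumptions. It only shows the interface described in the abstract: a vision-language policy emits a latent "verb" vector instead of discrete action tokens, and a reinforcement-learned WBC policy conditions on that vector, plus proprioception, to produce joint-level commands in a closed loop.

```python
import torch
import torch.nn as nn

class VisionLanguagePolicy(nn.Module):
    """High level (hypothetical sketch): fuses image and instruction
    features into a latent 'verb' vector, replacing explicit action tokens."""
    def __init__(self, vision_dim=512, text_dim=512, latent_dim=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(vision_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, vision_feat, text_feat):
        return self.fuse(torch.cat([vision_feat, text_feat], dim=-1))

class WholeBodyPolicy(nn.Module):
    """Low level (hypothetical sketch): an RL-trained controller that
    consumes proprioceptive state plus the latent verb and outputs
    dynamics-level joint commands."""
    def __init__(self, state_dim=60, latent_dim=32, action_dim=29):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 512),
            nn.ELU(),
            nn.Linear(512, action_dim),
        )

    def forward(self, proprio_state, latent_verb):
        return self.net(torch.cat([proprio_state, latent_verb], dim=-1))

# Closed-loop rollout: the slow vision-language policy refreshes the verb,
# while the fast WBC policy tracks it at the control rate.
vla = VisionLanguagePolicy()
wbc = WholeBodyPolicy()
vision_feat = torch.randn(1, 512)  # stand-in for a real vision encoder output
text_feat = torch.randn(1, 512)    # stand-in for a real language encoder output
z = vla(vision_feat, text_feat)    # latent "verb"
for _ in range(10):                # low-level steps per high-level step (assumed)
    state = torch.randn(1, 60)     # proprioceptive observation stub
    action = wbc(state, z)         # joint-level command
```

The key design point the sketch mirrors is the interface: the two policies communicate only through the low-dimensional latent, so the vision-language level never has to reason about dynamics, and the WBC level never has to parse language.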