LeVERB: Humanoid Whole-Body Control with Latent Vision-Language Instruction

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Vision-Language-Action (VLA) models rely on hand-crafted, low-level action vocabularies—e.g., end-effector poses—rendering them inadequate for agile Whole-Body Control (WBC) required by humanoid robots. Method: We introduce the first sim-to-real-ready, closed-loop VLA benchmark explicitly designed for WBC, encompassing 150+ cross-category tasks. We propose LeVERB, a hierarchical implicit instruction-following framework that tightly couples vision-language policies with reinforcement-learning-based whole-body dynamical controllers—eliminating explicit action tokenization and instead learning executable, kinematically grounded “verb” representations via synthetic motion demonstrations. Contribution/Results: LeVERB achieves 80% zero-shot success on simple navigation and an overall task success rate of 58.5%, outperforming hierarchical VLA baselines by 7.8×. This establishes a foundational benchmark and architecture for scalable, real-world WBC in humanoid robotics.

📝 Abstract
Vision-language-action (VLA) models have demonstrated strong semantic understanding and zero-shot generalization, yet most existing systems assume an accurate low-level controller with a hand-crafted action "vocabulary" such as end-effector pose or root velocity. This assumption confines prior work to quasi-static tasks and precludes the agile, whole-body behaviors required by humanoid whole-body control (WBC) tasks. To address this gap in the literature, we start by introducing the first sim-to-real-ready, vision-language, closed-loop benchmark for humanoid WBC, comprising over 150 tasks from 10 categories. We then propose LeVERB: Latent Vision-Language-Encoded Robot Behavior, a hierarchical latent instruction-following framework for humanoid vision-language WBC, the first of its kind. At the top level, a vision-language policy learns a latent action vocabulary from synthetically rendered kinematic demonstrations; at the low level, a reinforcement-learned WBC policy consumes these latent verbs to generate dynamics-level commands. In our benchmark, LeVERB achieves an 80% zero-shot success rate on simple visual navigation tasks and a 58.5% success rate overall, outperforming a naive hierarchical whole-body VLA implementation by 7.8 times.
Problem


Bridging vision-language models with agile humanoid whole-body control
Creating a sim-to-real benchmark for vision-language humanoid tasks
Developing hierarchical latent action framework for dynamic control
Innovation


Hierarchical latent instruction-following framework
Vision-language policy learns latent actions
Reinforcement-learned WBC generates dynamics commands
Haoru Xue
PhD in AI Robotics, UC Berkeley
robot learning · VLA · humanoid
Xiaoyu Huang
University of California, Berkeley
Dantong Niu
University of California, Berkeley
Vision Language Model · Robotics
Qiayuan Liao
University of California, Berkeley
Legged Robots
Thomas Kragerud
Norwegian University of Science and Technology
J. Gravdahl
Norwegian University of Science and Technology
Xue Bin Peng
Simon Fraser University
Guanya Shi
Assistant Professor, CMU RI | Amazon Scholar, FAR (Frontier AI & Robotics)
Robotics · Robot Learning · Reinforcement Learning · Control · Humanoid
Trevor Darrell
Professor of Computer Science, U.C. Berkeley
Computer Vision · Artificial Intelligence · AI · Machine Learning · Deep Learning
Koushil Sreenath
University of California, Berkeley
S. Sastry
University of California, Berkeley