E-SDS: Environment-aware See it, Do it, Sorted - Automated Environment-Aware Reinforcement Learning for Humanoid Locomotion

πŸ“… 2025-12-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the poor generalizability of reward functions and the lack of environmental awareness in humanoid locomotion control on complex terrain, this paper proposes the first environment-aware automatic reward generation framework, integrating vision-language models (VLMs) with real-time terrain sensing. The method enables end-to-end reward synthesis via multimodal reward modeling, VLM-driven joint semantic-geometric understanding of terrain, and demonstration video-guided reinforcement learning. It achieves fully autonomous stair descent on the Unitree G1 humanoid robot, the first such demonstration. Evaluated across four complex terrain types, the approach reduces velocity tracking error by 51.9%–82.6%, shortens reward design time from days to under two hours, and significantly improves cross-terrain transferability and deployment efficiency.
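The paper itself does not publish its generated reward code, but the kind of terrain-aware reward the summary describes can be sketched as a weighted sum of a velocity-tracking term, a terrain term driven by the sensed height map, and an energy penalty. All function and parameter names below are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def terrain_aware_reward(base_vel, cmd_vel, height_map, joint_torques,
                         w_vel=1.0, w_terrain=0.5, w_energy=0.001):
    """Hypothetical terrain-aware locomotion reward (illustrative only).

    base_vel, cmd_vel : (3,) actual and commanded base velocity [m/s]
    height_map        : (H, W) terrain heights sampled around the robot [m]
    joint_torques     : (n,) applied joint torques [Nm]
    """
    # Velocity tracking with an exponential kernel, a common shaping
    # choice in legged-robot RL.
    vel_err = np.sum((cmd_vel - base_vel) ** 2)
    r_vel = np.exp(-vel_err / 0.25)

    # Terrain term: reward is higher over flat local terrain, lower over
    # large height discontinuities (e.g. a stair edge), encouraging
    # deliberate foot placement.
    roughness = np.std(height_map)
    r_terrain = np.exp(-roughness)

    # Small energy penalty keeps motions smooth and efficient.
    r_energy = -np.sum(joint_torques ** 2)

    return w_vel * r_vel + w_terrain * r_terrain + w_energy * r_energy

# On flat terrain with perfect tracking and zero torque, the reward is
# simply w_vel + w_terrain.
r = terrain_aware_reward(np.zeros(3), np.zeros(3),
                         np.zeros((5, 5)), np.zeros(12))
```

In E-SDS the analogous weights and terms would be proposed by the VLM from the terrain analysis and demonstration video rather than hand-tuned.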

πŸ“ Abstract
Vision-language models (VLMs) show promise in automating reward design in humanoid locomotion, which could eliminate the need for tedious manual engineering. However, current VLM-based methods are essentially "blind", as they lack the environmental perception required to navigate complex terrain. We present E-SDS (Environment-aware See it, Do it, Sorted), a framework that closes this perception gap. E-SDS integrates VLMs with real-time terrain sensor analysis to automatically generate reward functions that facilitate training of robust perceptive locomotion policies, grounded by example videos. Evaluated on a Unitree G1 humanoid across four distinct terrains (simple, gaps, obstacles, stairs), E-SDS uniquely enabled successful stair descent, while policies trained with manually-designed rewards or a non-perceptive automated baseline were unable to complete the task. In all terrains, E-SDS also reduced velocity tracking error by 51.9-82.6%. Our framework reduces the human effort of reward design from days to less than two hours while simultaneously producing more robust and capable locomotion policies.
Problem

Research questions and friction points this paper is trying to address.

Automates reward design for humanoid locomotion using vision-language models
Enhances environmental perception for navigating complex terrains in robotics
Reduces manual engineering effort while improving policy robustness and performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates vision-language models with terrain sensor analysis
Automatically generates reward functions from example videos
Enables robust perceptive locomotion across complex terrains
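The bullets above describe an outer loop in which a VLM, prompted with a sensor-derived terrain summary and a description of the demonstration video, emits reward-function source code that is then used for policy training. A minimal sketch of that loop, with a stubbed VLM standing in for a real hosted model (all names here are assumptions for illustration):

```python
def generate_reward_code(vlm, terrain_summary, demo_description):
    """Hypothetical E-SDS-style step: prompt a VLM with a textual terrain
    summary (from real-time sensor analysis) plus a description of the
    demonstration video, and receive reward-function source code.
    `vlm` is any callable mapping a prompt string to generated text."""
    prompt = (
        "You are designing a reward function for humanoid locomotion.\n"
        f"Terrain (from sensors): {terrain_summary}\n"
        f"Demonstration: {demo_description}\n"
        "Return Python source for a function reward(state) -> float."
    )
    return vlm(prompt)

# Stubbed VLM for illustration; a real system would query a hosted model
# and iterate on the generated code using training feedback.
def stub_vlm(prompt):
    return "def reward(state):\n    return -abs(state['vel_err'])\n"

code = generate_reward_code(stub_vlm,
                            "descending stairs, 0.15 m risers",
                            "human walking downstairs steadily")
namespace = {}
exec(code, namespace)  # compile the generated reward for the RL trainer
```

The generated `reward` callable would then be handed to the reinforcement-learning training loop, closing the automation cycle the paper reports.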
πŸ”Ž Similar Papers
No similar papers found.
Enis Yalcin
University College London, Department of Computer Science, London, UK
Joshua O'Hara
University College London, Department of Computer Science, London, UK
Maria Stamatopoulou
University College London, Department of Computer Science, London, UK
Chengxu Zhou
Associate Professor in Robotics & AI, University College London
Legged Manipulation · Whole Body Control · Humanoid Robot · Telexistence · Embodied AI
Dimitrios Kanoulas
Professor in Robotics and AI, UKRI FLF, University College London (UCL), Archimedes/Athena RC
Robot Cognition · Robot Learning · Legged Robotics