AI Summary
To address the poor generalizability of reward functions and lack of environmental awareness in humanoid robot locomotion control on complex terrain, this paper proposes the first environment-aware automatic reward generation framework integrating vision-language models (VLMs) with real-time terrain sensing. The method enables end-to-end reward synthesis via multimodal reward modeling, VLM-driven joint semantic-geometric understanding of terrain, and demonstration video-guided reinforcement learning. It achieves fully autonomous stair descent for the Unitree G1 humanoid robot, the first such demonstration. Evaluated across four complex terrain types, the approach reduces velocity tracking error by 51.9%–82.6%, shortens reward design time from days to under two hours, and significantly improves cross-terrain transferability and deployment efficiency.
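The summary above describes a generate-train-evaluate loop: a VLM drafts reward code from terrain sensing, a policy is trained under that reward, and the result is scored against a demonstration video to guide the next draft. The sketch below shows one plausible shape for such a loop; `propose_reward`, `train_policy`, and `score_against_demo` are hypothetical stand-ins (stubbed here so the example runs), not the paper's actual interface.

```python
# Hedged sketch of a VLM-driven reward-generation loop. All three helper
# functions are illustrative stubs, not the paper's API.
import random

def propose_reward(terrain_summary: str, feedback: str) -> str:
    """Stand-in for the VLM call that drafts reward-function code
    from a terrain description plus feedback on the last candidate."""
    return f"# reward conditioned on: {terrain_summary}; {feedback}"

def train_policy(reward_code: str) -> dict:
    """Stand-in for RL training under the proposed reward."""
    return {"reward_code": reward_code}

def score_against_demo(policy: dict) -> float:
    """Stand-in for grounding a trained policy against the demo video."""
    return random.random()

def reward_generation_loop(terrain_summary: str, iterations: int = 3):
    best_code, best_score = None, float("-inf")
    feedback = "no prior candidate"
    for _ in range(iterations):
        code = propose_reward(terrain_summary, feedback)   # 1. VLM drafts reward
        policy = train_policy(code)                        # 2. train policy
        score = score_against_demo(policy)                 # 3. compare to demo
        if score > best_score:
            best_code, best_score = code, score
        feedback = f"previous candidate scored {score:.2f}"
    return best_code, best_score

print(reward_generation_loop("stairs: step height 0.17 m, depth 0.30 m"))
```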
Abstract
Vision-language models (VLMs) show promise in automating reward design for humanoid locomotion, which could eliminate the need for tedious manual engineering. However, current VLM-based methods are essentially "blind": they lack the environmental perception required to navigate complex terrain. We present E-SDS (Environment-aware See it, Do it, Sorted), a framework that closes this perception gap. E-SDS integrates VLMs with real-time terrain sensor analysis to automatically generate reward functions, grounded by example videos, that train robust perceptive locomotion policies. Evaluated on a Unitree G1 humanoid across four distinct terrains (simple, gaps, obstacles, stairs), E-SDS was the only method to achieve successful stair descent; policies trained with manually designed rewards or with a non-perceptive automated baseline were unable to complete the task. Across all terrains, E-SDS also reduced velocity tracking error by 51.9%–82.6%. Our framework cuts the human effort of reward design from days to less than two hours while simultaneously producing more robust and capable locomotion policies.
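To make the reward-generation claim concrete, here is a minimal, hypothetical example of the kind of terrain-aware reward code such a framework might emit: an exponential velocity-tracking kernel (a common term in RL locomotion rewards) combined with a foot-clearance term read off a local height map. The function name, term weights, and constants are illustrative assumptions, not rewards generated by E-SDS.

```python
# Illustrative terrain-aware reward sketch; weights and thresholds are
# assumptions for demonstration, not taken from the paper.
import numpy as np

def terrain_aware_reward(base_lin_vel, cmd_vel, foot_heights, height_map):
    """Combine velocity tracking with a terrain-aware foot-clearance term.

    base_lin_vel, cmd_vel : (2,) planar base velocity and command [m/s]
    foot_heights          : (2,) foot heights above ground [m]
    height_map            : (H, W) terrain heights sampled around the robot [m]
    """
    # Exponential-kernel velocity tracking
    track_err = np.sum((cmd_vel - base_lin_vel) ** 2)
    r_track = np.exp(-track_err / 0.25)

    # Terrain-aware clearance: the swing foot should clear the nearby
    # step edge, estimated here as the maximum local terrain height
    step_edge = float(np.max(height_map))
    clearance = np.clip(np.min(foot_heights) - step_edge, -0.1, 0.1)
    r_clear = np.exp(-np.abs(clearance - 0.05) / 0.05)

    # Illustrative weighting of the two terms
    return 1.0 * r_track + 0.5 * r_clear

# Example call with toy values
print(terrain_aware_reward(
    base_lin_vel=np.array([0.45, 0.0]),
    cmd_vel=np.array([0.5, 0.0]),
    foot_heights=np.array([0.12, 0.02]),
    height_map=np.zeros((11, 11)),
))
```

A perception-free baseline would have to drop the `height_map` argument entirely, which is exactly the gap the abstract attributes to prior "blind" VLM-based methods.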