iWalker: Imperative Visual Planning for Walking Humanoid Robot

📅 2024-09-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing modular humanoid robot frameworks suffer from fragmented perception-planning-execution pipelines, error accumulation, and limited adaptability. This paper introduces an end-to-end vision-driven bipedal locomotion system that unifies visual perception, gait planning, and whole-body balance control, enabling joint optimization of obstacle avoidance and dynamic balance. The authors propose an imperative learning (IL)-based bilevel optimization framework that decouples high-level step planning from low-level whole-body balance control, removing the reliance on modular architectures and labeled datasets and enabling fully self-supervised training. The method integrates end-to-end vision-action modeling, model predictive control (MPC), and self-supervised learning. Evaluations in simulation and on real robotic hardware demonstrate improvements in generalization, robustness, and autonomous walking capability in complex, unstructured environments.

📝 Abstract
Humanoid robots, designed to operate in human-centric environments, serve as a fundamental platform for a broad range of tasks. Although humanoid robots have been extensively studied for decades, a majority of existing humanoid robots still heavily rely on complex modular frameworks, leading to inflexibility and potential compounded errors from independent sensing, planning, and acting components. In response, we propose an end-to-end humanoid sense-plan-act walking system that simultaneously enables vision-based obstacle avoidance, footstep planning, and whole-body balancing. We designed two imperative learning (IL)-based bilevel optimizations, for model-predictive step planning and whole-body balancing respectively, to achieve self-supervised learning for humanoid robot walking. This enables the robot to learn from arbitrary unlabeled data, improving its adaptability and generalization capabilities. We refer to our method as iWalker and demonstrate its effectiveness in both simulated and real-world environments, representing a significant advancement toward autonomous humanoid robots.
Problem

Research questions and friction points this paper addresses.

How to enable vision-based obstacle avoidance and footstep planning
How to achieve self-supervised learning for humanoid robot walking
How to improve adaptability and generalization in humanoid robots
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end vision-based walking system
Imperative learning for self-supervised optimization
Bilevel optimization for step and balance planning
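In an imperative learning (IL)-style bilevel optimization, an upper-level predictor is supervised by the solution of a lower-level optimizer, so no labeled data is needed. The following one-parameter toy sketch illustrates that loop only; the cost function, solver, and learning rates are invented for this example and are not taken from the paper:

```python
import random

# Toy sketch of an IL-style bilevel loop (illustrative only, NOT the
# paper's implementation). Upper level: a one-parameter "network" predicts
# a step length from a goal distance. Lower level: a solver refines the
# prediction by minimizing a hand-made cost (reach the goal, but penalize
# long steps). The refined solution supervises the network -- no labels.

def lower_level_solve(step, goal, iters=60, lr=0.1):
    """Gradient descent on cost(s) = (s - goal)^2 + 0.5 * s^2."""
    s = step
    for _ in range(iters):
        grad = 2.0 * (s - goal) + s  # derivative of the cost wrt s
        s -= lr * grad
    return s  # converges to the cost's optimum, (2/3) * goal

def train(epochs=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    w = 0.0  # upper-level parameter: predicted step = w * goal
    for _ in range(epochs):
        goal = rng.uniform(0.5, 2.0)            # arbitrary unlabeled sample
        pred = w * goal                         # upper-level prediction
        target = lower_level_solve(pred, goal)  # lower-level refinement
        err = pred - target                     # self-supervised error
        w -= lr * err * goal                    # gradient of 0.5*err^2 wrt w
    return w

w = train()
print(round(w, 3))  # prints 0.667: the network imitates the lower level
```

Here the lower-level solver plays the role the paper assigns to model-predictive step planning and whole-body balance control, while the upper level learns to land directly on the lower level's optimum from arbitrary unlabeled samples.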
👥 Authors
Xiaodi Lin
The Spatial AI & Robotics (SAIR) Lab, Computer Science and Engineering, University at Buffalo, NY 14260, USA
Yuhao Huang
Shenzhen University
Medical Image Computing · Ultrasound · Model Robustness
Taimeng Fu
University at Buffalo
SLAM · Navigation · Neuro-Symbolic Learning
Xiaobin Xiong
The Wisconsin Expeditious Legged Locomotion (WELL) Lab, Mechanical Engineering, University of Wisconsin-Madison, WI 53706, USA
Chen Wang
The Spatial AI & Robotics (SAIR) Lab, Computer Science and Engineering, University at Buffalo, NY 14260, USA