Learning a Vision-Based Footstep Planner for Hierarchical Walking Control

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to bipedal locomotion on unstructured terrain are fragile and slow in practice because they rely on proprioceptive feedback alone or on hand-crafted vision pipelines. To address this, we propose a vision-driven hierarchical control framework: a high-level gait planner based on reinforcement learning operates on local elevation maps for terrain-adaptive decision-making, while a low-level controller employs a reduced-order linear inverted pendulum model with angular momentum dynamics to construct a low-dimensional state representation, coupled with operational-space control to generate stable foot trajectories. The framework establishes an end-to-end visual perception–planning–control loop. Evaluated on the Cassie robot in both simulation and hardware experiments, it achieves robust locomotion over challenging terrains—including slopes, gravel, and stairs—with enhanced stability and real-time capability (planning frequency >30 Hz), demonstrating effectiveness and generalization across diverse environments.

📝 Abstract
Bipedal robots demonstrate potential in navigating challenging terrains through dynamic ground contact. However, current frameworks often depend solely on proprioception or use manually designed visual pipelines, which are fragile in real-world settings and complicate real-time footstep planning in unstructured environments. To address this problem, we present a vision-based hierarchical control framework that integrates a reinforcement learning high-level footstep planner, which generates footstep commands based on a local elevation map, with a low-level Operational Space Controller that tracks the generated trajectories. We utilize the Angular Momentum Linear Inverted Pendulum model to construct a low-dimensional state representation to capture an informative encoding of the dynamics while reducing complexity. We evaluate our method across different terrain conditions using the underactuated bipedal robot Cassie and investigate the capabilities and challenges of our approach through simulation and hardware experiments.
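The abstract's low-dimensional state builds on the Angular Momentum Linear Inverted Pendulum (ALIP) model, whose planar form describes the center of mass (CoM) relative to the stance foot via the angular momentum about the contact point. The sketch below is an illustrative simulation of those standard planar ALIP dynamics, not the paper's implementation; the mass, CoM height, and integration scheme are assumptions.

```python
import numpy as np

# Planar ALIP dynamics: state [x_c, L_y], where x_c is the CoM position
# relative to the stance foot and L_y the angular momentum about the
# stance contact point. With constant CoM height H and mass m:
#   dx_c/dt = L_y / (m * H),   dL_y/dt = m * g * x_c
M_ROBOT = 31.0   # illustrative mass in kg (roughly Cassie-scale, assumed)
H = 0.9          # illustrative constant CoM height in m (assumed)
G = 9.81         # gravitational acceleration in m/s^2

def alip_step(state, dt):
    """One explicit-Euler step of the planar ALIP dynamics."""
    x_c, L_y = state
    return np.array([x_c + dt * L_y / (M_ROBOT * H),
                     L_y + dt * M_ROBOT * G * x_c])

def simulate(state, duration, dt=1e-3):
    """Roll the pendulum forward. The CoM diverges from the contact
    point unless a new footstep resets x_c -- choosing that footstep
    is precisely the high-level planner's job."""
    for _ in range(int(duration / dt)):
        state = alip_step(state, dt)
    return state

# Starting slightly ahead of the stance foot, the CoM accelerates away:
final = simulate(np.array([0.01, 0.0]), 0.4)
```

The unstable divergence is what makes the model informative for planning: the footstep location is the control input that stabilizes this reduced-order system.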
Problem

Research questions and friction points this paper is trying to address.

Develop a vision-based footstep planner for bipedal robots
Overcome the fragility of hand-crafted visual pipelines in real-world settings
Enable real-time footstep planning in unstructured environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-based hierarchical control framework
Reinforcement learning footstep planner
Angular Momentum Linear Inverted Pendulum model
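The three contributions above fit together as a perception–planning–control loop: the RL planner consumes a local elevation map and the reduced-order ALIP state to emit a footstep command, and a faster low-level Operational Space Controller tracks it. The skeleton below is a hypothetical sketch of that loop structure only; the stand-in planner and controller, the rates, and all names are illustrative assumptions, not the paper's learned policy or OSC.

```python
import numpy as np

PLANNER_HZ = 30    # the summary reports planning at > 30 Hz
CONTROL_HZ = 300   # assumed low-level rate (not stated in the excerpt)

def plan_footstep(elevation_map, alip_state):
    """Stand-in for the RL policy: pick the lowest nearby cell as a
    crude terrain-aware footstep target (a placeholder heuristic)."""
    idx = np.unravel_index(np.argmin(elevation_map), elevation_map.shape)
    return np.array(idx, dtype=float)

def track_footstep(target, foot_pos):
    """Stand-in for the Operational Space Controller: a proportional
    pull of the swing foot toward the commanded footstep."""
    return 0.1 * (target - foot_pos)

# One planner tick followed by the inner control steps it spans:
elevation_map = np.zeros((5, 5))
elevation_map[3, 1] = -0.05          # a dip the placeholder planner selects
foot_pos = np.zeros(2)
alip_state = np.zeros(2)
target = plan_footstep(elevation_map, alip_state)
for _ in range(CONTROL_HZ // PLANNER_HZ):
    foot_pos += track_footstep(target, foot_pos)
```

The two-rate structure is the key design choice: the slow planner absorbs the expensive visual input, while the fast controller only needs the low-dimensional state and the current footstep command.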
Minku Kim
General Robotics, Automation, Sensing and Perception (GRASP) Laboratory, University of Pennsylvania, Philadelphia, PA, 19104, USA
Brian Acosta
General Robotics, Automation, Sensing and Perception (GRASP) Laboratory, University of Pennsylvania, Philadelphia, PA, 19104, USA
Pratik Chaudhari
University of Pennsylvania
Deep Learning · Machine Learning · Robotics
Michael Posa
Associate Professor, University of Pennsylvania
Robotics · Control · Optimization · Contact Dynamics · Machine Learning