🤖 AI Summary
This work addresses the challenge of coordinating perception and motion planning for legged robots in complex, dynamic environments. It proposes VIP-Loco, a framework that integrates visual scene understanding, deep reinforcement learning (RL), and infinite-horizon model predictive control (MPC). During training, an internal model maps proprioceptive states and depth images into compact kinodynamic features that condition the RL policy. At deployment, the learned models are embedded within the infinite-horizon MPC formulation, combining the adaptability of RL with structured, interpretable planning. This approach uniquely unifies vision-driven RL with infinite-horizon MPC, and in simulation executes diverse tasks, including traversing slopes and stairs, crawling, and jumping gaps, across quadrupedal, bipedal, and wheeled-biped morphologies. The method improves both robustness and interpretability over existing approaches.
📝 Abstract
Perceptive locomotion for legged robots requires anticipating and adapting to complex, dynamic environments. Model Predictive Control (MPC) serves as a strong baseline, providing interpretable motion planning with constraint enforcement, but struggles with high-dimensional perceptual inputs and rapidly changing terrain. In contrast, model-free Reinforcement Learning (RL) adapts well across visually challenging scenarios but lacks planning. To bridge this gap, we propose VIP-Loco, a framework that integrates vision-based scene understanding with RL and planning. During training, an internal model maps proprioceptive states and depth images into compact kinodynamic features used by the RL policy. At deployment, the learned models are used within an infinite-horizon MPC formulation, combining adaptability with structured planning. We validate VIP-Loco in simulation on challenging locomotion tasks, including slopes, stairs, crawling, tilting, gap jumping, and climbing, across three robot morphologies: a quadruped (Unitree Go1), a biped (Cassie), and a wheeled-biped (TronA1-W). Through ablations and comparisons with state-of-the-art methods, we show that VIP-Loco unifies planning and perception, enabling robust, interpretable locomotion in diverse environments.
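The internal-model pipeline the abstract describes, an encoder compressing proprioceptive states and a depth image into compact kinodynamic features that the policy then consumes, can be sketched as follows. This is purely a hypothetical illustration: the dimensions, layer sizes, function names, and the use of untrained NumPy MLPs are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the internal-model idea: an encoder maps
# (proprioception, depth image) to a compact feature vector, and a
# policy maps (proprioception, features) to actions. All shapes and
# architectures here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP layers as a stand-in for a trained network."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Apply tanh-activated hidden layers, then a linear output layer."""
    for W, b in layers[:-1]:
        x = np.tanh(x @ W + b)
    W, b = layers[-1]
    return x @ W + b

# Assumed dimensions (not from the paper): 48-D proprioception,
# 64x64 depth image, 32-D kinodynamic features, 12 actuators.
PROPRIO_DIM, DEPTH_DIM, FEAT_DIM, ACT_DIM = 48, 64 * 64, 32, 12

# Internal model: proprioception + flattened depth -> compact features.
encoder = mlp([PROPRIO_DIM + DEPTH_DIM, 256, FEAT_DIM])
# Policy: proprioception + features -> joint-space actions.
policy = mlp([PROPRIO_DIM + FEAT_DIM, 128, ACT_DIM])

proprio = rng.standard_normal(PROPRIO_DIM)
depth = rng.standard_normal(DEPTH_DIM)

features = forward(encoder, np.concatenate([proprio, depth]))
action = forward(policy, np.concatenate([proprio, features]))
print(features.shape, action.shape)  # -> (32,) (12,)
```

At deployment, per the abstract, these learned models would sit inside an infinite-horizon MPC formulation rather than acting directly; the sketch only shows the training-time mapping from observations to features and actions.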