🤖 AI Summary
Quadrupedal locomotion over unstructured terrain is challenging due to hard-to-model nonlinear dynamics, strong reliance on accurate state estimation, and the high computational overhead of online nonlinear model predictive control (NMPC). This paper proposes an end-to-end multi-task policy learning framework that bypasses explicit state estimation and real-time NMPC optimization, directly mapping proprioceptive sensory inputs to multimodal joint-level control commands. Its core innovation is distilling NMPC-generated expert demonstrations into a unified multi-task neural network, enabling smooth gait transitions and lightweight deployment. Evaluated both in simulation and on a real Go1 robot, the method achieves R² > 0.95 for all joint target predictions, reduces control latency by 83%, and significantly improves cross-terrain generalization. The approach provides an efficient, robust, end-to-end solution for highly dynamic quadrupedal locomotion.
📝 Abstract
Quadruped robots excel in traversing complex, unstructured environments where wheeled robots often fail. However, enabling efficient and adaptable locomotion remains challenging due to the quadrupeds' nonlinear dynamics, high degrees of freedom, and the computational demands of real-time control. Optimization-based controllers, such as Nonlinear Model Predictive Control (NMPC), have shown strong performance, but their reliance on accurate state estimation and high computational overhead makes deployment in real-world settings challenging. In this work, we present a Multi-Task Learning (MTL) framework in which expert NMPC demonstrations are used to train a single neural network to predict actions for multiple locomotion behaviors directly from raw proprioceptive sensor inputs. We evaluate our approach extensively on the quadruped robot Go1, both in simulation and on real hardware, demonstrating that it accurately reproduces expert behavior, allows smooth gait switching, and simplifies the control pipeline for real-time deployment. Our MTL architecture enables learning diverse gaits within a unified policy, achieving high $R^{2}$ scores for predicted joint targets across all tasks.
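The multi-task imitation setup described above can be sketched as a single network that maps proprioceptive observations plus a one-hot gait command to joint targets, trained to clone expert actions. The sketch below is a minimal, hypothetical illustration in numpy: the dimensions, the two-layer MLP, and the synthetic linear "expert" standing in for NMPC demonstrations are all assumptions, not the paper's actual architecture or training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed): proprioceptive observation size,
# number of gait tasks, and 12 actuated joints as on the Go1.
OBS_DIM, N_GAITS, N_JOINTS, HIDDEN = 33, 3, 12, 64

def init_policy():
    return {
        "W1": rng.normal(0, 0.1, (OBS_DIM + N_GAITS, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0, 0.1, (HIDDEN, N_JOINTS)),
        "b2": np.zeros(N_JOINTS),
    }

def forward(p, x):
    # Two-layer tanh MLP: observation + gait one-hot -> joint targets.
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"], h

# Stand-in "expert": one fixed random linear map per gait, playing the
# role of per-task NMPC demonstrations (purely synthetic).
expert_maps = rng.normal(0, 0.3, (N_GAITS, OBS_DIM, N_JOINTS))

def make_batch(n=256):
    obs = rng.normal(size=(n, OBS_DIM))
    gait = rng.integers(0, N_GAITS, size=n)
    onehot = np.eye(N_GAITS)[gait]
    targets = np.einsum("ni,nij->nj", obs, expert_maps[gait])
    return np.concatenate([obs, onehot], axis=1), targets

def train(p, steps=500, lr=0.05):
    # Behavior cloning: minimize MSE to expert joint targets across all
    # gaits in a single shared policy (manual backprop for the MLP).
    losses = []
    for _ in range(steps):
        x, y = make_batch()
        pred, h = forward(p, x)
        err = pred - y
        losses.append(float((err ** 2).mean()))
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ p["W2"].T) * (1 - h ** 2)
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        p["W1"] -= lr * gW1; p["b1"] -= lr * gb1
        p["W2"] -= lr * gW2; p["b2"] -= lr * gb2
    return losses

policy = init_policy()
losses = train(policy)
```

Conditioning on a gait one-hot is what lets one set of weights serve several locomotion tasks; switching gaits at deployment time then amounts to changing the command input rather than swapping controllers.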