🤖 AI Summary
This work addresses the challenge of endowing legged robots with both biologically diverse locomotion and precise user controllability. Methodologically, we propose a bio-inspired control framework grounded in unstructured animal motion data: (1) a variational autoencoder (VAE) models multi-gait motion sequences to learn a stylistically consistent latent representation enabling smooth gait transitions; (2) constrained inverse kinematics coupled with model predictive control (MPC) maps learned animal motions onto dynamically feasible robot trajectories; and (3) a reinforcement learning–based feedback controller ensures accurate velocity tracking and real-time, adaptive gait switching. Evaluated on a physical quadrupedal platform, the framework significantly improves locomotion fidelity, stability, and responsiveness. To our knowledge, this is the first end-to-end system that transforms raw animal motion capture data into high-fidelity, user-adjustable, and real-time responsive biomimetic locomotion control for legged robots.
📝 Abstract
This paper presents a control framework for legged robots that leverages unstructured real-world animal motion data to generate animal-like and user-steerable behaviors. Our framework learns to follow velocity commands while reproducing the diverse gait patterns in the original dataset. To begin with, animal motion data is transformed into a robot-compatible database using constrained inverse kinematics and model predictive control, bridging the morphological and physical gap between the animal and the robot. Subsequently, a variational autoencoder-based motion synthesis module captures the diverse locomotion patterns in the motion database and generates smooth transitions between them in response to velocity commands. The resulting kinematic motions serve as references for a reinforcement learning-based feedback controller deployed on physical robots. We show that this approach enables a quadruped robot to adaptively switch gaits and accurately track user velocity commands while maintaining the stylistic coherence of the motion data. Additionally, we provide component-wise evaluations to analyze the system's behavior in depth and demonstrate the efficacy of our method for more accurate and reliable motion imitation.