AI Summary
Quadrupedal robots performing loco-manipulation in unstructured environments lack task adaptability and the ability to dynamically allocate limbs between functions. Method: This paper proposes an adaptive dual-module unified controller: an upper module that dynamically decouples locomotion and manipulation objectives in real time using multimodal instructions (trajectories, contact points, natural language); and a lower module that jointly optimizes gait stability and manipulation precision via hierarchical reinforcement learning (PPO), enabling efficient sim-to-real transfer. Contribution/Results: The approach enables on-demand, dynamic reconfiguration of limb function without predefined configurations or task-specific designs; to our knowledge, it is the first such method for quadrupeds. Evaluated on 12 complex real-world tasks, it achieves a mean success rate of 78.9%, significantly outperforming fixed-configuration and single-function baselines.
Abstract
The ability to flexibly leverage limbs for loco-manipulation is essential for enabling autonomous robots to operate in unstructured environments. Yet, prior work on loco-manipulation is often constrained to specific tasks or predetermined limb configurations. In this work, we present Reinforcement Learning for Interlimb Coordination (ReLIC), an approach that enables versatile loco-manipulation through flexible interlimb coordination. The key to our approach is an adaptive controller that seamlessly bridges the execution of manipulation motions and the generation of stable gaits based on task demands. Through the interplay between two controller modules, ReLIC dynamically assigns each limb to manipulation or locomotion and robustly coordinates them to achieve task success. Using efficient reinforcement learning in simulation, ReLIC learns to perform stable gaits in accordance with the manipulation goals in the real world. To solve diverse and complex tasks, we further propose to interface the learned controller with different types of task specifications, including target trajectories, contact points, and natural language instructions. Evaluated on 12 real-world tasks that require diverse and complex coordination patterns, ReLIC demonstrates its versatility and robustness by achieving a success rate of 78.9% on average. Videos and code can be found at https://relic-locoman.github.io/.
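The two-module structure described above can be illustrated with a minimal sketch. All names here (`TaskSpec`, `assign_roles`, `control_step`, the limb labels) are illustrative assumptions, not the authors' API: an upper module maps each limb to a role from the task specification, and a lower module, standing in for the learned RL policy, emits per-limb commands so manipulation limbs track their targets while the remaining limbs maintain a stable gait.

```python
# Hypothetical sketch of a dual-module limb-coordination loop.
# Names and structure are illustrative assumptions, not ReLIC's actual code.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    LOCOMOTION = "locomotion"
    MANIPULATION = "manipulation"

@dataclass
class TaskSpec:
    # Limbs the task currently needs for manipulation, e.g. {"front_left"}.
    manipulation_limbs: set = field(default_factory=set)

LIMBS = ("front_left", "front_right", "rear_left", "rear_right")

def assign_roles(spec: TaskSpec) -> dict:
    """Upper module: dynamically assign each limb a role on demand."""
    return {
        limb: Role.MANIPULATION if limb in spec.manipulation_limbs
        else Role.LOCOMOTION
        for limb in LIMBS
    }

def control_step(roles: dict) -> dict:
    """Lower module (stand-in for the learned policy): manipulation limbs
    track an end-effector target; the rest keep a stable gait."""
    return {
        limb: "track_end_effector_target" if role is Role.MANIPULATION
        else "stable_gait"
        for limb, role in roles.items()
    }

# One control tick: the task asks for the front-left limb as a manipulator.
roles = assign_roles(TaskSpec(manipulation_limbs={"front_left"}))
commands = control_step(roles)
print(commands["front_left"])   # track_end_effector_target
print(commands["rear_right"])   # stable_gait
```

Changing `manipulation_limbs` between ticks is what "on-demand reconfiguration" amounts to in this toy view; in the paper, the reassignment and the per-limb control are learned rather than hand-coded.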