🤖 AI Summary
Developing multimodal locomotion policies (e.g., walking, running, jumping) for mobile robots remains challenging due to high development barriers and poor sim-to-real transfer. Method: This paper introduces an open-source reinforcement learning framework built on Unity ML-Agents, integrating URDF/SDF model parsing, Proximal Policy Optimization (PPO), and cross-platform simulation-to-robot deployment interfaces. It enables one-click robot model import, full-configuration generalization, multimodal coordinated training, and morphology evolution driven by extreme performance objectives. Contributions/Results: First, it establishes an end-to-end automated pipeline spanning modeling, training, and deployment. Second, it proposes a motion-modality-decoupled collaborative learning mechanism. Third, it introduces a closed-loop evolutionary verification paradigm for morphology optimization. Experiments demonstrate efficient acquisition of robust locomotion policies in simulation across diverse legged and wheeled robots, with successful real-world deployment, significantly reducing development and iteration costs for locomotion control strategies.
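The training component relies on PPO. As a minimal illustration only (not the paper's implementation, whose code is not yet released), the clipped surrogate objective at the core of PPO can be sketched as follows; the function name and shapes here are illustrative:

```python
import numpy as np

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped PPO surrogate loss (to be minimized); illustrative sketch."""
    # Probability ratio r_t = pi_new(a|s) / pi_old(a|s), computed in log space.
    ratios = np.exp(np.asarray(log_probs_new) - np.asarray(log_probs_old))
    advantages = np.asarray(advantages)
    # Pessimistic (elementwise minimum) of unclipped and clipped terms,
    # negated so gradient descent maximizes the surrogate objective.
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))
```

The clipping keeps the updated policy close to the one that collected the data, which is what makes large-batch simulated rollouts (as in ML-Agents) stable to train on.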
📝 Abstract
This paper introduces Unity RL Playground, an open-source reinforcement learning framework built on top of Unity ML-Agents. Unity RL Playground automates the process of training mobile robots to perform various locomotion tasks such as walking, running, and jumping in simulation, with the potential for seamless transfer to real hardware. Key features include one-click training for imported robot models, universal compatibility with diverse robot configurations, multi-mode motion learning capabilities, and extreme performance testing to aid in robot design optimization and morphological evolution. The accompanying video can be found at https://linqi-ye.github.io/video/iros25.mp4, and the code will be released soon.