Mechanical Intelligence-Aware Curriculum Reinforcement Learning for Humanoids with Parallel Actuation

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Humanoid robots with parallel actuation suffer inaccurate motion modeling and suboptimal control policies when simulators discard the mechanical intelligence embedded in those mechanisms. Method: This paper proposes an end-to-end curriculum reinforcement learning framework, the first to fully preserve the differential pulley system and closed-chain constraints (five-bar and four-bar linkages) within GPU-accelerated simulation using MJX, overcoming the limitations of conventional simplified serial models. The approach combines native closed-chain dynamics modeling, parallelized computation, and curriculum-based policy training to tightly co-design mechanical structure and control strategy. Contribution/Results: Evaluated on the BRUCE humanoid platform, the method transfers zero-shot to physical hardware and significantly outperforms model predictive control in adaptability to complex terrain and in motion stability. It establishes a scalable paradigm for learned motion control of high-DOF parallel mechanisms in embodied intelligence systems.

📝 Abstract
Reinforcement learning (RL) has enabled significant advances in humanoid robot locomotion, yet most learning frameworks do not account for mechanical intelligence embedded in parallel actuation mechanisms due to limitations in simulator support for closed kinematic chains. This omission can lead to inaccurate motion modeling and suboptimal policies, particularly for robots with high actuation complexity. This paper presents an end-to-end curriculum RL framework for BRUCE, a kid-sized humanoid robot featuring three distinct parallel mechanisms in its legs: a differential pulley, a 5-bar linkage, and a 4-bar linkage. Unlike prior approaches that rely on simplified serial approximations, we simulate all closed-chain constraints natively using GPU-accelerated MJX (MuJoCo), preserving the hardware's physical properties during training. We benchmark our RL approach against a Model Predictive Controller (MPC), demonstrating better surface generalization and performance in real-world zero-shot deployment. This work highlights the computational approaches and performance benefits of fully simulating parallel mechanisms in end-to-end learning pipelines for legged humanoids.
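The abstract's key point is that the closed kinematic chains are simulated natively rather than approximated as serial linkages. In MuJoCo's MJCF format (which MJX consumes), such loops are typically closed with an `equality/connect` constraint. The sketch below is illustrative only: the body names, dimensions, and anchor point are assumptions for a generic planar four-bar linkage, not values from the BRUCE model.

```python
# Minimal MJCF sketch of a planar four-bar linkage whose loop is closed
# with MuJoCo's equality/connect constraint -- the mechanism a serial
# approximation would drop. All names and dimensions are illustrative.
FOUR_BAR_MJCF = """
<mujoco model="four_bar_sketch">
  <worldbody>
    <body name="crank" pos="0 0 0.1">
      <joint name="crank_hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0  0.05 0 0" size="0.005"/>
      <body name="coupler" pos="0.05 0 0">
        <joint name="coupler_hinge" type="hinge" axis="0 1 0"/>
        <geom type="capsule" fromto="0 0 0  0.1 0 0" size="0.005"/>
      </body>
    </body>
    <body name="rocker" pos="0.12 0 0.1">
      <joint name="rocker_hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0  0.1 0 0" size="0.005"/>
    </body>
  </worldbody>
  <equality>
    <!-- Welds the free end of the coupler (anchor in coupler's frame)
         to the rocker, closing the kinematic loop. -->
    <connect body1="coupler" body2="rocker" anchor="0.1 0 0"/>
  </equality>
</mujoco>
"""
```

A model like this can be loaded with `mujoco.MjModel.from_xml_string(...)` and stepped in batch on GPU via `mujoco.mjx`; the equality constraint is then enforced during every training rollout rather than being baked into a simplified serial model.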
Problem

Research questions and friction points this paper is trying to address.

Accurately modeling humanoid locomotion with parallel actuation mechanisms
Overcoming simulator limitations for closed kinematic chains in RL
Improving real-world performance by simulating parallel mechanisms natively
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-accelerated MJX simulates closed-chain constraints
Curriculum RL trained on natively simulated parallel mechanisms preserves their physical properties
End-to-end learning outperforms MPC in deployment
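The curriculum element above can be sketched as a simple difficulty scheduler that exposes the policy to harder conditions (e.g. rougher terrain) once its performance clears a threshold. The levels and threshold below are illustrative assumptions, not values reported in the paper:

```python
from dataclasses import dataclass


@dataclass
class CurriculumScheduler:
    """Promotes training to a harder difficulty level when the mean
    episode reward clears a threshold. Levels and threshold are
    illustrative assumptions, not taken from the paper."""
    levels: tuple = (0.00, 0.02, 0.05, 0.10)  # e.g. terrain roughness (m)
    promote_reward: float = 0.8               # mean reward needed to level up
    level: int = 0

    def terrain_roughness(self) -> float:
        # Current difficulty fed to the (hypothetical) terrain generator.
        return self.levels[self.level]

    def update(self, mean_episode_reward: float) -> None:
        # Advance one level when the policy performs well enough,
        # capping at the hardest level.
        if (mean_episode_reward >= self.promote_reward
                and self.level < len(self.levels) - 1):
            self.level += 1


sched = CurriculumScheduler()
sched.update(0.9)  # reward cleared the threshold, so difficulty increases
```

In a full pipeline, `update` would be called after each evaluation interval, and the scheduler's output would parameterize terrain generation and command sampling in the batched MJX environments.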
👥 Authors
Yusuke Tanaka (Department of Mechanical and Aerospace Engineering, UCLA, Los Angeles, CA, USA)
Alvin Zhu (University of California, Los Angeles)
Quanyou Wang (PhD student, UCLA)
Dennis Hong (Department of Mechanical and Aerospace Engineering, UCLA, Los Angeles, CA, USA)