🤖 AI Summary
Conventional hardware design and control-algorithm development for humanoid robots proceed sequentially, which leaves hardware potential underexploited, particularly for highly dynamic tasks such as chin-ups.
Method: This paper introduces EA-CoRL, a co-design framework that jointly optimizes hardware morphology and control policy. It employs an evolutionary algorithm to explore the hardware design space while a continuous adaptation mechanism fine-tunes the reinforcement learning policy as the hardware evolves, rather than retraining it from scratch for each candidate design.
Contribution/Results: Evaluated on the RH5 humanoid platform, EA-CoRL achieves a 23.6% improvement in task fitness over baseline RL-based co-design methods and expands design-space coverage by 2.1×. The framework improves both co-optimization convergence speed and the attainable performance ceiling, establishing a scalable, end-to-end paradigm for advancing dynamic capabilities in humanoid robotics.
📝 Abstract
Humanoid robots have seen significant advancements in both design and control, with a growing emphasis on integrating these aspects to enhance overall performance. Traditionally, robot design has followed a sequential process, where control algorithms are developed after the hardware is finalized. However, this approach can be myopic and prevent robots from fully exploiting their hardware capabilities. Recent approaches advocate co-design, optimizing design and control in parallel to maximize robotic capabilities. This paper presents the Evolutionary Continuous Adaptive RL-based Co-Design (EA-CoRL) framework, which combines reinforcement learning (RL) with evolutionary strategies to enable continuous adaptation of the control policy to the hardware. EA-CoRL comprises two key components: Design Evolution, which explores hardware choices using an evolutionary algorithm to identify efficient configurations, and Policy Continuous Adaptation, which fine-tunes a task-specific control policy across evolving designs to maximize performance rewards. We evaluate EA-CoRL by co-designing the actuators (gear ratios) and control policy of the RH5 humanoid for a highly dynamic chin-up task, previously infeasible due to actuator limitations. Comparative results against state-of-the-art RL-based co-design methods show that EA-CoRL achieves a higher fitness score and broader design space exploration, highlighting the critical role of continuous policy adaptation in robot co-design.
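To make the two-component structure concrete, here is a minimal, self-contained sketch of the outer co-design loop the abstract describes: an evolutionary algorithm proposes hardware candidates (gear ratios), and one shared policy is continuously fine-tuned across designs instead of being retrained per candidate. Everything here is an illustrative assumption, not the paper's implementation: `finetune_and_evaluate` is a toy stand-in for RL fine-tuning, and the fitness is a synthetic quadratic with its optimum at a gear ratio of 100.

```python
import random

def finetune_and_evaluate(policy, gear_ratios, steps=3):
    """Toy stand-in for RL fine-tuning (Policy Continuous Adaptation):
    nudge the shared policy toward the current design, then return a
    synthetic task fitness (higher is better, optimum at ratio 100)."""
    for _ in range(steps):
        policy = [0.9 * p + 0.1 * g for p, g in zip(policy, gear_ratios)]
    fitness = -sum((g - 100.0) ** 2 for g in gear_ratios)
    return policy, fitness

def ea_corl(n_joints=4, pop_size=8, generations=20, seed=0):
    """Illustrative Design Evolution loop: elitist selection plus
    Gaussian mutation over per-joint gear ratios."""
    rng = random.Random(seed)
    population = [[rng.uniform(50.0, 150.0) for _ in range(n_joints)]
                  for _ in range(pop_size)]
    policy = [0.0] * n_joints  # single policy, adapted across all designs
    best_design, best_fit = None, float("-inf")
    for _ in range(generations):
        scored = []
        for design in population:
            policy, fit = finetune_and_evaluate(policy, design)
            scored.append((fit, design))
            if fit > best_fit:
                best_fit, best_design = fit, design
        scored.sort(key=lambda s: s[0], reverse=True)
        elites = [d for _, d in scored[: pop_size // 2]]  # keep top half
        population = elites + [  # refill with mutated copies of elites
            [g + rng.gauss(0.0, 5.0) for g in rng.choice(elites)]
            for _ in range(pop_size - len(elites))
        ]
    return best_design, best_fit

best_design, best_fit = ea_corl()
print(best_design, best_fit)
```

In this sketch the key design choice the abstract emphasizes is visible in the inner loop: `policy` is threaded through every evaluation, so each new hardware candidate starts from a policy already adapted to nearby designs.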