🤖 AI Summary
Controlling autonomous, repeated morphological reconfiguration of soft robots throughout their operational lifetime remains a fundamental challenge. Method: We propose the first modeling framework that unifies morphological reconfiguration, locomotion control, and environmental interaction within a single high-dimensional reinforcement learning (RL) action space; we formulate this control problem as an RL task and introduce a "coarse-to-fine" curriculum learning strategy that improves training stability and generalization. Contribution/Results: We develop DittoGym, the first open-source benchmark enabling fine-grained evaluation of morphological evolution. Leveraging deep RL and deformable-body simulation, our learned policies execute multiple autonomous morphological transitions within a single task sequence. On DittoGym, they achieve fine-grained, closed-loop coordination of morphology and behavior, significantly outperforming prior approaches in both morphological adaptability and task performance.
📝 Abstract
Robot co-design, where a robot's morphology is optimized jointly with a learned policy to solve a specific task, is an emerging area of research. It holds particular promise for soft robots, which are amenable to novel manufacturing techniques that can realize learned morphologies and actuators. Inspired by nature and recent novel robot designs, we propose to go a step further and explore reconfigurable robots, defined as robots that can change their morphology within their lifetime. We formalize control of reconfigurable soft robots as a high-dimensional reinforcement learning (RL) problem. We unify morphology change, locomotion, and environment interaction in the same action space, and introduce a coarse-to-fine curriculum that enables us to discover policies capable of fine-grained control of the resulting robots. We also introduce DittoGym, a comprehensive RL benchmark of tasks that require fine-grained morphology changes from reconfigurable soft robots. Finally, we evaluate our coarse-to-fine algorithm on DittoGym and demonstrate robots that learn to change their morphology several times within a single task sequence, a capability uniquely enabled by our RL algorithm. More results are available at https://suninghuang19.github.io/dittogym_page/.
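To make the coarse-to-fine idea concrete, the sketch below shows one plausible form it could take: the policy first commands a low-resolution actuation grid, which is upsampled to the robot's full actuation field, and the commanded resolution grows over training stages. This is an illustrative assumption, not the paper's actual implementation; the class, method names, resolutions, and stage schedule are all hypothetical.

```python
import numpy as np


def upsample(coarse: np.ndarray, target: int) -> np.ndarray:
    """Nearest-neighbor upsample of a square coarse action grid to target x target."""
    n = coarse.shape[0]
    idx = (np.arange(target) * n) // target  # map each fine cell back to a coarse cell
    return coarse[np.ix_(idx, idx)]


class CoarseToFineCurriculum:
    """Hypothetical curriculum: train with a low-resolution action grid, then refine.

    Early stages expose the policy to a small, stable action space; later stages
    hand it progressively finer control over the full actuation field.
    """

    def __init__(self, resolutions=(4, 8, 16), steps_per_stage=100_000):
        self.resolutions = resolutions
        self.steps_per_stage = steps_per_stage

    def action_resolution(self, global_step: int) -> int:
        """Resolution of the policy's action grid at a given training step."""
        stage = min(global_step // self.steps_per_stage, len(self.resolutions) - 1)
        return self.resolutions[stage]

    def to_full_action(self, coarse_action: np.ndarray, full_res: int = 32) -> np.ndarray:
        """Expand the policy's coarse action to the simulator's full actuation grid."""
        return upsample(coarse_action, full_res)


# Usage: the environment always receives a full-resolution actuation field,
# regardless of which curriculum stage the policy is in.
curriculum = CoarseToFineCurriculum()
res = curriculum.action_resolution(global_step=150_000)   # stage 1 -> 8x8 grid
policy_action = np.zeros((res, res))                      # placeholder policy output
full_action = curriculum.to_full_action(policy_action)    # 32x32 actuation field
```

A design note on this sketch: keeping the simulator's action interface fixed at full resolution while only the policy's output resolution changes means stage transitions need no environment changes, only a new policy head.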