🤖 AI Summary
This study investigates the performance trade-offs between centralized and decentralized multi-agent reinforcement learning (MARL) for controlling soft robotic arms modeled as Cosserat rods. Method: Using a simulation environment that integrates PyElastica with the OpenAI Gym interface, we evaluate Proximal Policy Optimization (PPO) and Multi-Agent PPO (MAPPO) under partial observability across varying numbers of controlled segments (n) and three benchmark scenarios: target reaching under baseline conditions, recovery from external disturbances, and adaptation to actuator failures. Contribution/Results: For n ≤ 4, decentralization offers no significant benefit, and for n ≤ 2 the centralized policy outperforms it; for 4 < n ≤ 12, decentralized architectures improve sample efficiency by 23% and markedly enhance robustness and fault tolerance, albeit at roughly 40% more wall-clock training time than the centralized baseline. This work is the first to quantitatively characterize the “segment count–architecture–performance” mapping in MARL-based soft-robot control, yielding reproducible design principles and architecture-selection guidelines for sim-to-real transfer.
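To make the experimental setup concrete, the sketch below shows how a reaching environment with n controlled sections might be exposed through the OpenAI Gym interface. It is illustrative only: the class name, the toy damped-chain dynamics, and the reward shaping are assumptions standing in for the paper's actual PyElastica Cosserat-rod simulation, and it targets the classic 4-tuple Gym `step` API.

```python
import numpy as np
import gym
from gym import spaces


class SoftArmReachEnv(gym.Env):
    """Minimal sketch of the n-segment reaching task.

    A planar kinematic chain with damped toy dynamics stands in for the
    PyElastica Cosserat-rod simulation; only the interface (one torque
    command per controlled section, a flat global observation) mirrors
    the setup described in the paper.
    """

    def __init__(self, n_segments=4, seg_len=0.2, target=(0.5, 0.4)):
        self.n, self.seg_len = n_segments, seg_len
        self.target = np.asarray(target, dtype=np.float64)
        # One normalized actuation command per controlled section.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(self.n,), dtype=np.float32)
        # Flat global observation: per-segment bend angle + angular rate.
        self.observation_space = spaces.Box(
            -np.inf, np.inf, shape=(2 * self.n,), dtype=np.float32)
        self.reset()

    def reset(self):
        self.theta = np.zeros(self.n)  # per-segment bend angles
        self.omega = np.zeros(self.n)  # per-segment angular rates
        return self._obs()

    def _tip(self):
        # Planar forward kinematics of the n-segment chain.
        angles = np.cumsum(self.theta)
        return self.seg_len * np.array([np.cos(angles).sum(),
                                        np.sin(angles).sum()])

    def _obs(self):
        return np.concatenate([self.theta, self.omega]).astype(np.float32)

    def step(self, action):
        torque = np.clip(action, -1.0, 1.0)
        # Damped toy dynamics: stand-in for the PyElastica time stepper.
        self.omega += 0.1 * torque - 0.05 * self.omega
        self.theta += 0.1 * self.omega
        dist = float(np.linalg.norm(self._tip() - self.target))
        # Reaching reward with a small effort penalty (assumed shaping).
        reward = -dist - 0.01 * float(np.square(torque).sum())
        done = dist < 0.05
        return self._obs(), reward, done, {"distance": dist}
```

The same environment serves both architectures: the centralized PPO controller consumes the flat observation and emits all n commands, while the MAPPO variant assigns each segment's command to its own agent.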
📝 Abstract
This paper presents a quantitative comparison between centralised and distributed multi-agent reinforcement learning (MARL) architectures for controlling a soft robotic arm modelled as a Cosserat rod in simulation. Using PyElastica and the OpenAI Gym interface, we train both a global Proximal Policy Optimisation (PPO) controller and a Multi-Agent PPO (MAPPO) controller under identical budgets. Both approaches act on an arm divided into $n$ controlled sections. The study systematically varies $n$ and evaluates the arm's ability to reach a fixed target in three scenarios: a default baseline condition, recovery from an external disturbance, and adaptation to actuator failure. The evaluation metrics are mean action magnitude, mean final distance, mean episode length, and success rate. The results show no significant benefit from the distributed policy when the number of controlled sections $n \le 4$; in very simple systems, when $n \le 2$, the centralised policy outperforms the distributed one. When $n$ increases to $4 < n \le 12$, the distributed policy exhibits higher sample efficiency: it achieves stronger success rates, resilience, and robustness under local observability and converges faster for the same sample budget. However, centralised policies are far more time-efficient during training, requiring much less wall-clock time to process the same number of samples. These findings highlight the trade-offs between centralised and distributed policies in reinforcement-learning-based control of soft robotic systems and provide actionable design guidance for future sim-to-real transfer in soft rod-like manipulators.
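The local observability condition under which the distributed policy is trained can be illustrated with a small helper that partitions a flat global observation into per-agent views. This is a sketch under stated assumptions, not the paper's implementation: the observation layout (first $n$ entries are bend angles, next $n$ are angular rates) and the choice of $k$ neighbouring sections per agent are hypothetical.

```python
import numpy as np


def split_observation(global_obs, n_segments, k=1):
    """Partition a flat observation into per-agent local views.

    Each of the n agents sees its own section plus k neighbours on each
    side (partial observability); indices are clamped at the arm's ends,
    so boundary agents see replicated edge sections. The centralised PPO
    policy instead consumes `global_obs` unchanged.
    """
    theta, omega = np.split(np.asarray(global_obs), 2)
    views = []
    for i in range(n_segments):
        idx = [min(max(i + d, 0), n_segments - 1) for d in range(-k, k + 1)]
        views.append(np.concatenate([theta[idx], omega[idx]]))
    return views


# Example: per-agent views for a 6-section arm with 1-neighbour visibility.
obs = np.zeros(2 * 6)  # flat global observation (angles then rates)
views = split_observation(obs, n_segments=6, k=1)
assert len(views) == 6 and views[0].shape == (6,)
```

Under this partitioning, each MAPPO actor conditions only on its local view while a shared critic may still see the full state during training, which is the standard centralised-training, decentralised-execution setup for MAPPO.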