A Quantitative Comparison of Centralised and Distributed Reinforcement Learning-Based Control for Soft Robotic Arms

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the performance trade-offs between centralized and decentralized multi-agent reinforcement learning (MARL) for controlling soft robotic arms modeled as Cosserat rods. Method: Using a simulation environment integrating PyElastica and OpenAI Gym, we evaluate Proximal Policy Optimization (PPO) and Multi-Agent PPO (MAPPO) under partial observability across varying numbers of control segments (n) and three benchmark tasks: target reaching, recovery from external disturbances, and adaptation to actuator failures. Contribution/Results: For n ≤ 4, centralized policies achieve higher task success rates; for 4 < n ≤ 12, decentralized architectures improve sample efficiency by 23% and significantly enhance robustness and fault tolerance, albeit at roughly 40% longer training time than the centralized approach. This work is the first to quantitatively characterize the “segment count–architecture–performance” mapping in MARL-based soft robot control, yielding reproducible design principles and architecture selection guidelines for sim-to-real transfer.

📝 Abstract
This paper presents a quantitative comparison between centralised and distributed multi-agent reinforcement learning (MARL) architectures for controlling a soft robotic arm modelled as a Cosserat rod in simulation. Using PyElastica and the OpenAI Gym interface, we train both a global Proximal Policy Optimisation (PPO) controller and a Multi-Agent PPO (MAPPO) controller under identical budgets. In both approaches, the arm is divided into $n$ controlled sections. The study systematically varies $n$ and evaluates the arm's ability to reach a fixed target in three scenarios: default baseline condition, recovery from external disturbance, and adaptation to actuator failure. Quantitative metrics used for the evaluation are mean action magnitude, mean final distance, mean episode length, and success rate. The results show no significant benefit from the distributed policy when the number of controlled sections $n \le 4$. In very simple systems, when $n \le 2$, the centralised policy outperforms the distributed one. When $n$ increases to $4 < n \le 12$, the distributed policy shows higher sample efficiency. In these systems, the distributed policy achieves higher success rates and greater resilience and robustness under local observability, and converges faster given the same sample size. However, centralised policies are considerably more time-efficient during training, requiring much less wall-clock time for the same number of samples. These findings highlight the trade-offs between centralised and distributed policies in reinforcement learning-based control for soft robotic systems and provide actionable design guidance for future sim-to-real transfer in soft rod-like manipulators.
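The training setup described above wraps a Cosserat-rod simulation in a Gym-style environment. A minimal sketch of such an interface is below; this is not the authors' code, and the toy planar-chain kinematics stand in for the PyElastica rod dynamics purely for illustration. The class name `SoftArmReachEnv`, the per-segment curvature actions, and the distance-based reward are all assumptions.

```python
# Hypothetical sketch (not the paper's implementation): a Gym-style
# environment for an n-segment soft arm reaching a fixed target.
# A planar n-link chain stands in for the Cosserat-rod dynamics that
# PyElastica would provide in the real setup.
import numpy as np

class SoftArmReachEnv:
    """Gym-like env: n controlled segments, fixed reach target."""

    def __init__(self, n_segments=4, target=(0.8, 0.0), max_steps=200):
        self.n = n_segments
        self.target = np.asarray(target, dtype=float)
        self.max_steps = max_steps
        self.reset()

    def reset(self):
        self.angles = np.zeros(self.n)  # one bend angle per segment
        self.t = 0
        return self._obs()

    def _tip(self):
        # Forward kinematics of a planar chain with unit total length.
        cum = np.cumsum(self.angles)
        seg = 1.0 / self.n
        return np.array([np.sum(seg * np.cos(cum)),
                         np.sum(seg * np.sin(cum))])

    def _obs(self):
        # Segment angles plus the tip-to-target offset.
        return np.concatenate([self.angles, self.target - self._tip()])

    def step(self, action):
        # Action: per-segment bend increments, clipped like an actuator limit.
        self.angles += np.clip(np.asarray(action, dtype=float), -0.1, 0.1)
        self.t += 1
        dist = np.linalg.norm(self.target - self._tip())
        done = dist < 0.05 or self.t >= self.max_steps
        return self._obs(), -dist, done, {"distance": dist}
```

In a setup like this, a single PPO policy would consume the full observation vector, while each MAPPO agent would see only a local slice of it, matching the paper's partial-observability condition.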
Problem

Research questions and friction points this paper is trying to address.

Compares centralized versus distributed reinforcement learning for soft robotic arm control
Evaluates performance under varying numbers of controlled sections and scenarios
Analyzes trade-offs between sample efficiency and training time in MARL architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares centralized and distributed reinforcement learning architectures
Uses PPO and MAPPO algorithms for soft robotic arm control
Evaluates performance under varying controlled sections and scenarios
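The centralised/distributed distinction above comes down to what each policy observes. A hedged sketch of one plausible observation split, assuming each segment agent sees only itself and its immediate neighbours (the function name and windowing scheme are illustrative, not from the paper):

```python
# Hypothetical sketch: deriving per-segment local observations for a
# distributed (MAPPO-style) policy from the global state a centralised
# PPO policy would see.
import numpy as np

def split_observations(global_obs, n_segments, local_window=1):
    """Return one local observation per segment agent.

    global_obs: per-segment features, shape (n_segments, d).
    Each agent sees its own segment plus `local_window` neighbours on
    each side, modelling partial observability in the distributed case.
    """
    locals_ = []
    for i in range(n_segments):
        lo = max(0, i - local_window)
        hi = min(n_segments, i + local_window + 1)
        locals_.append(global_obs[lo:hi].ravel())
    return locals_
```

Under this kind of split, the centralised controller's input grows linearly with $n$ while each agent's input stays fixed, which is one intuition for why the distributed architecture scales better past $n = 4$.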
Linxin Hou
Department of Electrical and Computer Engineering, National University of Singapore
Qirui Wu
Simon Fraser University
Zhihang Qin
Department of Mechanical Engineering, National University of Singapore
Neil Banerjee
Department of Electrical and Computer Engineering, National University of Singapore
Yongxin Guo
Department of Electrical and Computer Engineering, National University of Singapore
Cecilia Laschi
Professor, National University of Singapore
Robotics · Soft Robotics · Biorobotics