🤖 AI Summary
Collaborative multi-agent reinforcement learning (MARL) exhibits insufficient robustness and resilience during simulation-to-reality transfer. Method: We conduct over 82,000 experiments across four real-world robotic environments to systematically evaluate policy stability and disturbance rejection under multiple uncertainties, including action and observation noise, and perform extensive ablation studies. Contribution/Results: We reveal a strong nonlinear trade-off among cooperative performance, robustness, and resilience, and demonstrate that robustness does not generalize across distinct perturbation types. Crucially, we discover that simple hyperparameter tuning, without architectural or algorithmic modifications, significantly enhances cooperation, robustness, and resilience across mainstream MARL algorithms (e.g., QMIX, MAPPO) and diverse robustification methods. This improvement generalizes across algorithms and methods, offering a lightweight, broadly applicable pathway to enhancing the trustworthiness of MARL systems in real-world deployment.
📝 Abstract
In cooperative Multi-Agent Reinforcement Learning (MARL), it is common practice to tune hyperparameters in ideal simulated environments to maximize cooperative performance. However, policies tuned for cooperation often fail to maintain robustness and resilience under real-world uncertainties. Building trustworthy MARL systems requires a deep understanding of robustness, which ensures stability under uncertainties, and resilience, the ability to recover from disruptions, a concept extensively studied in control systems but largely overlooked in MARL. In this paper, we present a large-scale empirical study comprising 82,620 experiments to evaluate cooperation, robustness, and resilience in MARL across 4 real-world environments, 13 uncertainty types, and 15 hyperparameters. Our key findings are: (1) Under mild uncertainty, optimizing cooperation improves robustness and resilience, but this link weakens as perturbations intensify. Robustness and resilience also vary by algorithm and uncertainty type. (2) Robustness and resilience do not generalize across uncertainty modalities or agent scopes: policies robust to action noise for all agents may fail under observation noise on a single agent. (3) Hyperparameter tuning is critical for trustworthy MARL: surprisingly, standard practices like parameter sharing, GAE, and PopArt can hurt robustness, while early stopping, high critic learning rates, and Leaky ReLU consistently help. By optimizing hyperparameters alone, we observe substantial improvements in cooperation, robustness, and resilience across all MARL backbones, and the phenomenon also generalizes to robust MARL methods built on these backbones. Code and results are available at https://github.com/BUAA-TrustworthyMARL/adv_marl_benchmark .
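To make the evaluation protocol concrete, here is a minimal toy sketch (not the paper's benchmark code) of how action and observation noise can be injected during policy rollout to measure robustness as the gap between clean and perturbed return. The environment, policy, and noise scales are illustrative assumptions; the actual benchmark lives in the linked repository.

```python
import numpy as np

def evaluate(policy, obs_noise_std=0.0, act_noise_std=0.0,
             n_agents=2, horizon=50, seed=0):
    """Roll out a toy cooperative tracking task, optionally injecting
    Gaussian noise into each agent's observations and actions."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n_agents)           # agents' positions
    target = np.ones(n_agents)           # shared goal location
    total_reward = 0.0
    for _ in range(horizon):
        # observation perturbation: each agent sees a noisy position
        obs = state + rng.normal(0.0, obs_noise_std, n_agents)
        actions = policy(obs, target)
        # action perturbation: executed actions deviate from intended ones
        actions = actions + rng.normal(0.0, act_noise_std, n_agents)
        state = state + 0.1 * actions
        # cooperative reward: negative total distance to the goal
        total_reward += -np.abs(target - state).sum()
    return total_reward

# a simple proportional controller standing in for a trained MARL policy
policy = lambda obs, target: np.clip(target - obs, -1.0, 1.0)

clean = evaluate(policy)
noisy = evaluate(policy, obs_noise_std=0.5, act_noise_std=0.5)
```

Sweeping the two noise scales independently (and applying them to all agents versus a single agent) is the kind of grid that, at benchmark scale, distinguishes robustness to one uncertainty modality from robustness to another.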