🤖 AI Summary
This work questions whether existing cooperative multi-agent reinforcement learning benchmarks genuinely necessitate reasoning under decentralized partially observable Markov decision processes (Dec-POMDPs). To address this, the authors introduce a diagnostic toolkit combining statistical hypothesis testing with information-theoretic probing to systematically evaluate the reliance of standard baseline algorithms (IPPO and MAPPO) on historical information across 37 scenarios spanning MPE, SMAX, Overcooked, Hanabi, and MaBrax. By comparing the performance of reactive versus memory-based policies and analyzing behavioral complexity across tasks, they find that in more than half of the scenarios, reactive policies match memory-based ones. Furthermore, observed coordination often stems from fragile synchronous action coupling rather than robust temporal reasoning, suggesting that current benchmarks fail to adequately elicit core Dec-POMDP capabilities and casting doubt on their validity as meaningful testbeds for decentralized multi-agent learning.
📝 Abstract
Cooperative multi-agent reinforcement learning (MARL) is typically framed as a decentralised partially observable Markov decision process (Dec-POMDP), a setting whose hardness stems from two key challenges: partial observability and decentralised coordination. Genuinely solving such tasks requires Dec-POMDP reasoning, where agents use history to infer hidden states and coordinate based on local information. Yet it remains unclear whether popular benchmarks actually demand this reasoning or permit success via simpler strategies. We introduce a diagnostic suite combining statistically grounded performance comparisons and information-theoretic probes to audit the behavioural complexity of baseline policies (IPPO and MAPPO) across 37 scenarios spanning MPE, SMAX, Overcooked, Hanabi, and MaBrax. Our diagnostics reveal that success on these benchmarks rarely requires genuine Dec-POMDP reasoning. Reactive policies match the performance of memory-based agents in over half the scenarios, and emergent coordination frequently relies on brittle, synchronous action coupling rather than robust temporal influence. These findings suggest that some widely used benchmarks may not adequately test core Dec-POMDP assumptions under current training paradigms, potentially leading to over-optimistic assessments of progress. We release our diagnostic tooling to support more rigorous environment design and evaluation in cooperative MARL.
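The paper's exact diagnostics are not reproduced here, but the two ingredients the abstract names can be illustrated with a minimal sketch: a permutation test on episode returns to check whether a memory-based policy statistically outperforms a reactive one, and a plug-in mutual-information estimate to probe how much an agent's actions depend on a given signal. Function names, the permutation-test design, and the discrete MI estimator are illustrative assumptions, not the authors' implementation.

```python
# Illustrative (hypothetical) versions of the two kinds of diagnostic the
# abstract describes; the actual toolkit's interfaces are not shown here.
import numpy as np

def permutation_test(returns_a, returns_b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in mean episode return.

    A large p-value means we cannot distinguish the two policies, e.g. a
    reactive policy from a memory-based one on the same scenario.
    """
    rng = np.random.default_rng(seed)
    a, b = np.asarray(returns_a, float), np.asarray(returns_b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # break the policy labels, keep the returns
        diff = abs(pooled[: len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)  # add-one smoothing: valid p in (0, 1]

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in nats for discrete samples.

    Probing I(action; history-dependent signal): a value near zero suggests
    the policy is effectively reactive with respect to that signal.
    """
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    joint = {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
    px = {xi: np.sum(x == xi) / n for xi in set(x)}
    py = {yi: np.sum(y == yi) / n for yi in set(y)}
    return sum(
        (c / n) * np.log((c / n) / (px[xi] * py[yi]))
        for (xi, yi), c in joint.items()
    )
```

In this spirit, a scenario would be flagged as not requiring Dec-POMDP reasoning when the permutation test finds no significant gap between reactive and memory-based returns, and the MI probe shows actions carry little information about anything beyond the current observation.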