🤖 AI Summary
In large-scale multi-agent reinforcement learning (MARL), partial agent failures often cause severe system performance degradation. Method: This paper proposes Hierarchical Adversarial Decentralized Mean-Field Control (HAD-MFC) to identify the most disruptive subset of vulnerable agents. The upper level formulates a combinatorial selection problem, while the lower level models adversarial failure impacts; Fenchel–Rockafellar duality decouples the bilevel optimization, transforming the NP-hard combinatorial problem into a tractable MDP with theoretical optimality guarantees. The method integrates mean-field MARL, regularized Bellman operators, and greedy policy search to jointly learn vulnerability patterns and interpretable value functions. Contribution/Results: Experiments demonstrate significant improvements in vulnerability identification accuracy on large-scale MARL benchmarks and rule-based systems: the identified agents induce more severe cascading failures, validating both effectiveness and interpretability.
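To make the "regularized Bellman operator" concrete, here is a minimal tabular sketch of an entropy-regularized (soft) Bellman backup, the kind of operator a Fenchel–Rockafellar (convex-conjugate) regularization typically produces. The toy MDP and all names are illustrative assumptions, not the paper's actual construction.

```python
import math

def soft_value(q_row, tau):
    """Soft maximum: tau * log sum_a exp(Q(s,a)/tau); approaches max_a Q as tau -> 0."""
    m = max(q_row)
    return m + tau * math.log(sum(math.exp((q - m) / tau) for q in q_row))

def soft_bellman_backup(Q, R, P, gamma=0.9, tau=0.5):
    """One application of a regularized Bellman operator on tabular Q-values.

    Q[s][a]: current action values; R[s][a]: reward; P[s][a][s2]: transition prob.
    The log-sum-exp replaces the hard max, giving a smooth, dense learning signal.
    """
    n_s, n_a = len(Q), len(Q[0])
    new_Q = [[0.0] * n_a for _ in range(n_s)]
    for s in range(n_s):
        for a in range(n_a):
            new_Q[s][a] = R[s][a] + gamma * sum(
                P[s][a][s2] * soft_value(Q[s2], tau) for s2 in range(n_s)
            )
    return new_Q
```

Because the operator remains a gamma-contraction, iterating it converges to a unique regularized fixed point, which is what allows each level of the decoupled problem to be learned independently.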
📝 Abstract
Partial agent failure becomes inevitable as systems scale up, making it crucial to identify the subset of agents whose compromise would most severely degrade overall performance. In this paper, we study this Vulnerable Agent Identification (VAI) problem in large-scale multi-agent reinforcement learning (MARL). We frame VAI as a Hierarchical Adversarial Decentralized Mean Field Control (HAD-MFC) problem, where the upper level involves the NP-hard combinatorial task of selecting the most vulnerable agents, and the lower level learns worst-case adversarial policies for these agents using mean-field MARL. The two levels are tightly coupled, making HAD-MFC difficult to solve. To address this, we first decouple the hierarchy via the Fenchel-Rockafellar transform, which yields a regularized mean-field Bellman operator for the upper level and enables independent learning at each level, thus reducing computational complexity. We then reformulate the upper-level combinatorial problem as an MDP with dense rewards derived from this regularized mean-field Bellman operator, allowing us to sequentially identify the most vulnerable agents with greedy or RL algorithms. This decomposition provably preserves the optimal solution of the original HAD-MFC. Experiments show that our method effectively identifies more vulnerable agents in large-scale MARL and rule-based systems, fooling the system into worse failures, and learns a value function that reveals the vulnerability of each agent.
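The sequential identification step described above can be sketched as a greedy loop: agents are selected one at a time, each step picking the agent whose compromise causes the largest marginal drop in system value. This is a hedged illustration only; `system_value` below is a hypothetical stand-in for the learned value function, and the per-agent numbers are made up for the example.

```python
def system_value(failed):
    """Toy surrogate for the learned value function: system performance
    given a set of failed agent ids. Agents 0 and 1 interact, so their
    joint failure cascades (a superadditive drop)."""
    base = 10.0
    drop = sum({0: 3.0, 1: 2.5, 2: 1.0, 3: 0.5}[a] for a in failed)
    if 0 in failed and 1 in failed:
        drop += 2.0  # cascading failure when both critical agents fail
    return base - drop

def greedy_vulnerable_agents(agents, k):
    """Sequentially pick k agents, each maximizing the marginal value drop.

    This mirrors the MDP view: the 'state' is the set already selected,
    the 'action' adds one agent, and the dense reward is the marginal drop.
    """
    failed = set()
    for _ in range(k):
        best = max(
            (a for a in agents if a not in failed),
            key=lambda a: system_value(failed) - system_value(failed | {a}),
        )
        failed.add(best)
    return failed
```

For example, `greedy_vulnerable_agents([0, 1, 2, 3], 2)` selects agents 0 and 1: agent 0 has the largest standalone drop, and once it has failed, agent 1's marginal drop (including the cascade term) exceeds the rest.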