🤖 AI Summary
This work addresses robust training of graph-structured multi-agent systems under distributed, heterogeneous spatial adversarial perturbations. To overcome the limitations of conventional adversarial training—namely its centralized and homogeneous assumptions—we propose the first decentralized adversarial training framework for graph networks. Our approach formulates a distributed min-max optimization problem, integrating both diffusion-based and consensus-based strategies to explicitly model spatially heterogeneous attacks. We provide rigorous theoretical convergence guarantees for the algorithm under strongly convex, convex, and non-convex objective settings. Empirical evaluations demonstrate that the proposed method significantly enhances the robustness of multi-agent models against diverse graph-aware adversarial attacks—including topology- and feature-based perturbations—while validating the effectiveness and generalizability of collaborative defense mechanisms in decentralized learning environments.
📝 Abstract
The vulnerability of machine learning models to adversarial attacks has attracted considerable attention in recent years. Most existing studies focus on the behavior of stand-alone, single-agent learners. By contrast, this work studies adversarial training over graphs, where individual agents are subjected to perturbations of varying strength across space. Interactions among linked agents, together with the heterogeneity of the attack models possible over the graph, are expected to enhance robustness by leveraging the coordination power of the group. Using a min-max formulation of distributed learning, we develop a decentralized adversarial training framework for multi-agent systems. Specifically, we devise two decentralized adversarial training algorithms based on two popular decentralized learning strategies: diffusion and consensus. We analyze the convergence properties of the proposed framework in strongly convex, convex, and non-convex settings, and illustrate the resulting enhanced robustness to adversarial attacks.
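To make the min-max structure concrete, here is a minimal sketch of diffusion-style (adapt-then-combine) decentralized adversarial training on a toy linear-regression problem. This is an illustrative reconstruction, not the paper's implementation: the ring topology, combination matrix, step sizes, per-agent l2 attack budgets, and the PGD-style inner maximization are all assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Network: 4 agents on a ring with a doubly stochastic combination matrix A ---
K = 4
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# --- Local data: each agent observes noisy linear measurements of w_true ---
d, n = 5, 50
w_true = rng.standard_normal(d)
X = [rng.standard_normal((n, d)) for _ in range(K)]
y = [Xk @ w_true + 0.1 * rng.standard_normal(n) for Xk in X]

# Heterogeneous attack model: each agent faces a different l2 budget eps_k
eps = [0.0, 0.1, 0.2, 0.3]

def inner_max(w, Xk, yk, eps_k, steps=5, lr=0.5):
    """Approximate the inner maximization: per-sample perturbations delta_i
    with ||delta_i||_2 <= eps_k that increase the local squared loss (PGD)."""
    delta = np.zeros_like(Xk)
    for _ in range(steps):
        resid = (Xk + delta) @ w - yk          # per-sample residual
        delta += lr * resid[:, None] * w[None, :]   # gradient ascent on delta
        norms = np.linalg.norm(delta, axis=1, keepdims=True)
        delta *= np.minimum(1.0, eps_k / np.maximum(norms, 1e-12))  # project
    return delta

# --- Diffusion (adapt-then-combine) outer minimization ---
mu = 0.01                              # local step size
W = [np.zeros(d) for _ in range(K)]    # local model copies
for _ in range(300):
    psi = []
    for k in range(K):
        delta = inner_max(W[k], X[k], y[k], eps[k])
        Xp = X[k] + delta
        grad = Xp.T @ (Xp @ W[k] - y[k]) / n   # gradient at perturbed data
        psi.append(W[k] - mu * grad)           # adapt: local descent step
    # combine: each agent averages its neighbors' intermediate iterates
    W = [sum(A[l, k] * psi[l] for l in range(K)) for k in range(K)]
```

The consensus variant described in the abstract would instead combine the neighbors' previous iterates first and then apply the local gradient step; the adapt and combine steps swap order, but the communication pattern is the same.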