Decentralized Adversarial Training over Graphs

📅 2023-03-23
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses robust training of graph-structured multi-agent systems under distributed, spatially heterogeneous adversarial perturbations. To overcome the limitations of conventional adversarial training—namely its centralized and homogeneous assumptions—we propose the first decentralized adversarial training framework for graph networks. Our approach formulates a distributed min-max optimization problem, integrating both diffusion-based and consensus-based strategies to explicitly model spatially heterogeneous attacks. We provide rigorous theoretical convergence guarantees for the algorithm under strongly convex, convex, and non-convex objective settings. Empirical evaluations demonstrate that the proposed method significantly enhances the robustness of multi-agent models against diverse graph-aware adversarial attacks—including topology- and feature-based perturbations—while validating the effectiveness and generalizability of collaborative defense mechanisms in decentralized learning environments.
📝 Abstract
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years. Most existing studies focus on the behavior of stand-alone single-agent learners. In comparison, this work studies adversarial training over graphs, where individual agents are subjected to perturbations of varied strength levels across space. It is expected that interactions by linked agents, and the heterogeneity of the attack models that are possible over the graph, can help enhance robustness in view of the coordination power of the group. Using a min-max formulation of distributed learning, we develop a decentralized adversarial training framework for multi-agent systems. Specifically, we devise two decentralized adversarial training algorithms by relying on two popular decentralized learning strategies: diffusion and consensus. We analyze the convergence properties of the proposed framework for strongly convex, convex, and non-convex environments, and illustrate the enhanced robustness to adversarial attacks.
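The distributed min-max formulation mentioned in the abstract can be sketched as follows; the notation here (per-agent loss $J_k$ and per-agent perturbation budget $\epsilon_k$) is illustrative and not necessarily the paper's exact symbols:

```latex
\min_{w}\;
\frac{1}{K}\sum_{k=1}^{K}\;
\max_{\|\delta_k\| \le \epsilon_k}\;
\mathbb{E}\, J_k\!\left(w;\; x_k + \delta_k,\; y_k\right)
```

The key point is that each agent $k$ has its own budget $\epsilon_k$, which is how spatially heterogeneous attack strengths across the graph enter the formulation.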
Problem

Research questions and friction points this paper is trying to address.

Addresses the vulnerability of machine learning models to adversarial attacks
Develops decentralized adversarial training for multi-agent systems
Enhances robustness using diffusion and consensus strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized adversarial training over graphs
Min-max formulation for distributed learning
Diffusion and consensus-based training algorithms
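A minimal sketch of the diffusion-based (adapt-then-combine) variant on a toy least-squares problem. Everything here is illustrative and assumed rather than taken from the paper: the 4-agent ring graph, the combination matrix `A`, the local losses, and the one-step sign-based inner maximization standing in for the full inner max.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-agent ring with a doubly stochastic combination matrix A.
K = 4
A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Each agent k holds a local objective J_k(w) = 0.5 * ||X_k w - y_k||^2.
X = [rng.normal(size=(10, 3)) for _ in range(K)]
y = [rng.normal(size=10) for _ in range(K)]
mu, eps = 0.01, 0.1  # step size and per-agent perturbation budget (illustrative)

def worst_delta(k, w):
    # One sign-based ascent step on the local data: a crude stand-in
    # for the inner maximization. dJ_k/dX_k = r w^T with r = X_k w - y_k.
    r = X[k] @ w - y[k]
    return eps * np.sign(np.outer(r, w))

def grad(k, w, delta):
    # Gradient of agent k's loss evaluated on the perturbed local data.
    Xp = X[k] + delta
    return Xp.T @ (Xp @ w - y[k])

W = np.zeros((K, 3))  # one model per agent
for _ in range(300):
    # Adapt: each agent descends on its locally perturbed loss.
    psi = np.stack([W[k] - mu * grad(k, W[k], worst_delta(k, W[k]))
                    for k in range(K)])
    # Combine: diffusion averaging with neighbors (adapt-then-combine).
    # A consensus variant would instead average the previous iterates W
    # and apply the gradient step to that average.
    W = A @ psi
```

The combine step is what lets agents with weaker attacks help neighbors facing stronger ones; after enough iterations the agents' models should be close to one another.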
Ying Cao
School of Engineering, École Polytechnique Fédérale de Lausanne
Elsa Rizk
EPFL
machine learning, federated learning, optimization theory, information theory
Stefan Vlaski
Imperial College London
Distributed Optimization, Machine Learning, Statistical Signal Processing, Multi-Agent Systems
A. H. Sayed
School of Engineering, École Polytechnique Fédérale de Lausanne