Power to the Clients: Federated Learning in a Dictatorship Setting

📅 2025-10-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a novel class of malicious clients in federated learning—“dictator clients”—capable of unilaterally nullifying all other clients’ model updates without accessing their private data, thereby monopolizing global model evolution. We formally define this attack model and systematically analyze its efficacy under complex multi-client interactions—including collusion, competition, and betrayal. Through rigorous convergence analysis and empirical evaluation on CV and NLP tasks, we demonstrate that dictator attacks can fully hijack the direction of global model updates, driving the model far from the optimal solution. The work exposes a critical security vulnerability in decentralized training and introduces the first theoretical framework for quantitatively characterizing the damage boundary of dictator attacks. This framework provides foundational support for designing robust federated learning mechanisms resilient to such centralized manipulation.

📝 Abstract
Federated learning (FL) has emerged as a promising paradigm for decentralized model training, enabling multiple clients to collaboratively learn a shared model without exchanging their local data. However, the decentralized nature of FL also introduces vulnerabilities, as malicious clients can compromise or manipulate the training process. In this work, we introduce dictator clients, a novel, well-defined, and analytically tractable class of malicious participants capable of entirely erasing the contributions of all other clients from the server model while preserving their own. We propose concrete attack strategies that empower such clients and systematically analyze their effects on the learning process. Furthermore, we explore complex scenarios involving multiple dictator clients, including cases where they collaborate, act independently, or form an alliance only to ultimately betray one another. For each of these settings, we provide a theoretical analysis of their impact on the global model's convergence. Our algorithms and theoretical findings on these multi-dictator scenarios are further supported by empirical evaluations on both computer vision and natural language processing benchmarks.
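The core mechanic described in the abstract, one client's update overwhelming the aggregate so that all other contributions are erased, can be sketched with a model-replacement-style update under plain FedAvg. This is a minimal illustrative construction, not the paper's actual algorithm; the function names, the equal-weighting assumption, and the scaling trick are all assumptions for the sketch.

```python
import numpy as np

def fedavg(updates, weights):
    """Server-side FedAvg: weighted average of client updates."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def dictator_update(global_model, target_model, n_clients):
    """Hypothetical 'dictator' update: scaled so that, after equal-weight
    averaging over n_clients participants, the new global model lands on
    target_model, assuming the honest updates approximately sum to zero."""
    return n_clients * (target_model - global_model)

# Toy demo: 2-parameter "models", 3 honest clients whose updates cancel out.
g = np.array([1.0, -2.0])                      # current global model
honest = [np.array([0.1, 0.0]),
          np.array([-0.1, 0.1]),
          np.array([0.0, -0.1])]               # sum to zero by construction
target = np.array([5.0, 5.0])                  # dictator's desired global model
mal = dictator_update(g, target, n_clients=4)

new_global = g + fedavg(honest + [mal], weights=[1, 1, 1, 1])
# Because the honest updates cancel, new_global equals target exactly:
# the honest clients' contributions are nullified in a single round.
```

When honest updates do not cancel exactly, the same scaling still dominates the average and drags the global model toward the dictator's target, which is the intuition behind the "hijacked update direction" the summary describes.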
Problem

Research questions and friction points this paper is trying to address.

Dictator clients can erase other clients' model contributions
Concrete attack strategies grant a single malicious client control over federated training
How multiple dictator clients affect the global model's convergence, analyzed theoretically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces dictator clients to erase other contributions
Proposes attack strategies for malicious client manipulation
Analyzes collaboration, independence, and betrayal among multiple dictator clients
Mohammadsajad Alipour
Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
Mohammad Mohammadi Amiri
Assistant Professor at Rensselaer Polytechnic Institute
Machine Learning · Data Science · Optimization · Information Theory · Wireless Communications