On Model Protection in Federated Learning against Eavesdropping Attacks

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the intrinsic confidentiality of client model updates against eavesdropping attacks during transmission in federated learning. We systematically uncover, for the first time, the inherent model-protection mechanism embedded in the Federated Averaging (FedAvg) protocol, modeling it information-theoretically and quantifying how client selection probability, the structure of local objective functions, and the server's aggregation method affect the difficulty of model reconstruction. Using the reconstruction accuracy of the global model achieved by an eavesdropper as the evaluation metric, we conduct numerical experiments comparing FedAvg with differential-privacy baselines. Results show that standard FedAvg provides non-trivial inherent confidentiality: under typical settings, eavesdroppers achieve less than 60% reconstruction accuracy, well below the convergence performance of the server-side model. This work establishes implicit security properties of federated learning protocols, offering a novel perspective on lightweight privacy preservation.
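To make the mechanism concrete, here is a minimal toy simulation of FedAvg with partial client participation and an eavesdropper who averages whatever uplink updates it manages to intercept. This is a sketch of the general idea only, not the paper's experimental setup: the least-squares local objectives, the selection probability `p_select`, and the 0.5 intercept rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: d-dimensional linear model, n clients,
# each holding a local least-squares objective (an assumption, not the paper's).
d, n_clients, rounds = 10, 20, 50
p_select = 0.3   # client selection probability (illustrative)
lr = 0.1

# Synthetic local data for each client
A = [rng.normal(size=(30, d)) for _ in range(n_clients)]
b = [Ai @ rng.normal(size=d) + 0.1 * rng.normal(size=30) for Ai in A]

def local_update(w, Ai, bi):
    """One local gradient step on client i's least-squares loss."""
    grad = Ai.T @ (Ai @ w - bi) / len(bi)
    return w - lr * grad

w_global = np.zeros(d)   # server's model
w_eve = np.zeros(d)      # eavesdropper's estimate

for t in range(rounds):
    selected = [i for i in range(n_clients) if rng.random() < p_select]
    if not selected:
        continue
    updates = [local_update(w_global, A[i], b[i]) for i in selected]
    # Server: FedAvg aggregation over the selected clients' updates
    w_global = np.mean(updates, axis=0)
    # Eavesdropper: sees each uplink message only with some probability
    # (0.5 here is an assumed intercept rate) and averages what it observes.
    seen = [u for u in updates if rng.random() < 0.5]
    if seen:
        w_eve = np.mean(seen, axis=0)

print("server vs eavesdropper model gap:", np.linalg.norm(w_global - w_eve))
```

Lowering the intercept rate or the selection probability widens the gap between the server's model and the eavesdropper's estimate, which mirrors the qualitative effect the paper analyzes.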

📝 Abstract
In this study, we investigate the protection offered by federated learning algorithms against eavesdropping adversaries. In our model, the adversary is capable of intercepting model updates transmitted from clients to the server, enabling it to form its own estimate of the model. Unlike previous research, which predominantly focuses on safeguarding client data, our work shifts attention to protecting the client model itself. Through a theoretical analysis, we examine how various factors, such as the probability of client selection, the structure of local objective functions, global aggregation at the server, and the eavesdropper's capabilities, affect the overall level of protection. We further validate our findings through numerical experiments, assessing protection via the model accuracy achieved by the adversary. Finally, we compare our results with methods based on differential privacy, underscoring their limitations in this specific context.
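For context, the differential-privacy baselines mentioned above typically perturb each client update before transmission. The sketch below shows one standard such mechanism (L2 clipping plus Gaussian noise); the clipping norm and noise scale are illustrative values chosen here, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_mechanism(update, clip_norm=1.0, sigma=0.8):
    """Clip an update to a bounded L2 norm, then add Gaussian noise.
    clip_norm and sigma are illustrative choices, not the paper's values."""
    norm = max(np.linalg.norm(update), 1e-12)
    clipped = update * min(1.0, clip_norm / norm)
    noise = rng.normal(scale=sigma * clip_norm, size=update.shape)
    return clipped + noise

update = rng.normal(size=10)
noised = gaussian_mechanism(update)
print("distortion introduced by DP noise:", np.linalg.norm(noised - update))
```

The noise that limits the eavesdropper also distorts the honest aggregate, which is the kind of utility trade-off the abstract alludes to when noting the limitations of differential privacy in this context.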
Problem

Research questions and friction points this paper is trying to address.

Protect client models in federated learning against eavesdropping during transmission
Analyze how client selection, local objectives, and server aggregation affect model protection
Compare FedAvg's inherent protection with differential-privacy methods and their limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shifts focus from protecting client data to protecting the client model itself
Theoretically analyzes factors such as client selection probability, local objective structure, and global aggregation
Compares against differential-privacy baselines, highlighting their limitations in this setting