SecureAFL: Secure Asynchronous Federated Learning

📅 2026-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of asynchronous federated learning to poisoning attacks, noting that existing defenses are either ineffective or rely on strong assumptions about server capabilities. To overcome these limitations, the authors propose SecureAFL, a framework that, for the first time in asynchronous federated learning, combines three components without requiring a trusted or powerful server: anomaly detection on client updates, estimation of contributions from missing clients, and Byzantine-robust aggregation such as coordinate-wise median. The framework mitigates sophisticated poisoning attacks under realistic deployment conditions, and experiments across multiple real-world datasets show that SecureAFL significantly improves both model accuracy and robustness over state-of-the-art baselines.
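The coordinate-wise median named in the summary is a standard Byzantine-robust aggregator: instead of averaging client updates, the server takes the median of each parameter coordinate independently, so a minority of arbitrarily poisoned updates cannot pull the aggregate far from the honest cluster. A minimal sketch (illustrative only, not the paper's full pipeline):

```python
import numpy as np

def coordinate_wise_median(updates):
    """Byzantine-robust aggregation: per-coordinate median across updates.

    A minority of poisoned updates can shift at most the ordering of each
    coordinate, not drag the median to an arbitrary value.
    """
    stacked = np.stack(updates)        # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)  # median taken independently per coordinate

# Three honest updates near 1.0 and one poisoned outlier:
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
poisoned = [np.array([100.0, -100.0])]
agg = coordinate_wise_median(honest + poisoned)  # stays near the honest cluster
```

With a plain mean, the poisoned client would shift the first coordinate to roughly 25.75; the per-coordinate median keeps both coordinates near 1.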
📝 Abstract
Federated learning (FL) enables multiple clients to collaboratively train a global machine learning model via a server without sharing their private training data. In traditional FL, the system follows a synchronous approach, where the server waits for model updates from numerous clients before aggregating them to update the global model. However, synchronous FL is hindered by the straggler problem. To address this, the asynchronous FL architecture allows the server to update the global model immediately upon receiving any client's local model update. Despite its advantages, the decentralized nature of asynchronous FL makes it vulnerable to poisoning attacks. Several defenses tailored for asynchronous FL have been proposed, but these mechanisms remain susceptible to advanced attacks or rely on unrealistic server assumptions. In this paper, we introduce SecureAFL, an innovative framework designed to secure asynchronous FL against poisoning attacks. SecureAFL improves the robustness of asynchronous FL by detecting and discarding anomalous updates while estimating the contributions of missing clients. Additionally, it utilizes Byzantine-robust aggregation techniques, such as coordinate-wise median, to integrate the received and estimated updates. Extensive experiments on various real-world datasets demonstrate the effectiveness of SecureAFL.
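The abstract describes three interacting pieces: discard anomalous updates, estimate the contributions of clients that have not reported recently, and fold everything through a robust aggregator. The paper's exact algorithms are not reproduced here; the sketch below assumes a simple norm-based anomaly test and reuses each client's last accepted update as its estimated contribution, which are stand-ins for the paper's detection and estimation methods:

```python
import numpy as np

def secure_async_step(global_model, incoming, last_updates, client_id, threshold=3.0):
    """One asynchronous server step (illustrative sketch, not SecureAFL's exact algorithm).

    - Anomaly detection: reject an update whose L2 norm is far above the
      median norm of previously accepted updates (hypothetical rule).
    - Missing-client estimation: reuse each client's most recent accepted
      update as its current contribution (hypothetical rule).
    - Aggregation: coordinate-wise median over accepted + estimated updates.
    """
    norms = [np.linalg.norm(u) for u in last_updates.values()] or [np.linalg.norm(incoming)]
    if np.linalg.norm(incoming) > threshold * np.median(norms):
        pass  # anomalous update: discard, do not record it
    else:
        last_updates[client_id] = incoming  # accept and remember this client's update
    pool = list(last_updates.values())      # accepted + estimated (last-known) updates
    if not pool:
        return global_model
    agg = np.median(np.stack(pool), axis=0)  # Byzantine-robust aggregation
    return global_model + agg
```

Unlike plain asynchronous FL, which applies each arriving update immediately, this step always aggregates over the full set of last-known contributions, so a single poisoned arrival never moves the global model on its own.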
Problem

Research questions and friction points this paper is trying to address.

asynchronous federated learning
poisoning attacks
Byzantine robustness
model security
federated learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

asynchronous federated learning
poisoning attacks
Byzantine-robust aggregation
anomalous update detection
missing client estimation