FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
In asynchronous federated learning, model staleness often degrades performance, yet existing approaches assess staleness only through coarse-grained round differences, neglecting the internal state of the model. This work proposes FedPSA, a novel framework that introduces a fine-grained staleness metric based on parameter sensitivity and incorporates a dynamic momentum queue to continuously identify the current training phase, thereby adaptively adjusting tolerance for stale updates. Experimental results demonstrate that FedPSA consistently outperforms baseline methods across multiple datasets, achieving performance gains of up to 6.37% and surpassing the current state-of-the-art by 1.93%.

📝 Abstract
Asynchronous Federated Learning (AFL) has emerged as a significant research area in recent years. By not waiting for slower clients and executing the training process concurrently, it achieves faster training speed compared to traditional federated learning. However, due to the staleness introduced by the asynchronous process, its performance may degrade in some scenarios. Existing methods often use the round difference between the current model and the global model as the sole measure of staleness, which is coarse-grained and lacks observation of the model itself, thereby limiting the performance ceiling of asynchronous methods. In this paper, we propose FedPSA (Parameter Sensitivity-based Asynchronous Federated Learning), a more fine-grained AFL framework that leverages parameter sensitivity to measure model obsolescence and establishes a dynamic momentum queue to assess the current training phase in real time, thereby adjusting the tolerance for outdated information dynamically. Extensive experiments on multiple datasets and comparisons with various methods demonstrate the superior performance of FedPSA, achieving up to 6.37% improvement over baseline methods and 1.93% over the current state-of-the-art method.
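The two ingredients the abstract names, a parameter-sensitivity staleness measure and a dynamic momentum queue that gauges the training phase, can be sketched as follows. This is a minimal illustration under assumptions of my own (L2 parameter drift as the sensitivity proxy, mean-over-max of recent update norms as the phase signal, and a simple down-weighting rule); the paper's actual formulation is not reproduced here.

```python
from collections import deque
import numpy as np

def parameter_sensitivity(global_params, client_params):
    """Fine-grained staleness proxy (assumption): L2 drift of the
    stale client's parameters from the current global model,
    normalized by the global parameter norm."""
    drift = sum(np.linalg.norm(g - c) for g, c in zip(global_params, client_params))
    scale = sum(np.linalg.norm(g) for g in global_params) + 1e-12
    return drift / scale

class MomentumQueue:
    """Dynamic momentum queue: tracks recent global-update magnitudes
    to estimate the training phase. Early training (large, volatile
    updates) tolerates staleness; late training is stricter."""
    def __init__(self, maxlen=10):
        self.q = deque(maxlen=maxlen)

    def push(self, update_norm):
        self.q.append(update_norm)

    def tolerance(self):
        if not self.q:
            return 1.0  # no history yet: fully tolerant
        # Higher recent momentum relative to the peak -> higher tolerance.
        return float(np.mean(self.q) / (max(self.q) + 1e-12))

def aggregate(global_params, client_params, queue, base_lr=1.0):
    """Blend a (possibly stale) client update into the global model,
    down-weighted by its sensitivity-based staleness and scaled by
    the phase-dependent tolerance."""
    s = parameter_sensitivity(global_params, client_params)
    alpha = base_lr * queue.tolerance() / (1.0 + s)
    new_params = [g + alpha * (c - g) for g, c in zip(global_params, client_params)]
    queue.push(sum(np.linalg.norm(n - g) for n, g in zip(new_params, global_params)))
    return new_params
```

A client whose parameters have drifted far from the global model receives a smaller mixing weight `alpha`, and that weight shrinks further once the queue indicates training has entered a low-momentum (near-convergence) phase.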
Problem

Research questions and friction points this paper is trying to address.

Asynchronous Federated Learning
staleness
model obsolescence
parameter sensitivity
behavioral staleness
Innovation

Methods, ideas, or system contributions that make the work stand out.

asynchronous federated learning
parameter sensitivity
behavioral staleness
dynamic momentum queue
model obsolescence