🤖 AI Summary
This work addresses the challenge of systematic drift in federated learning caused by distorted local gradients under client data heterogeneity, which impedes global convergence. To mitigate this issue, we propose an Exploratory-Convergent Gradient Re-aggregation (ECGR) mechanism inspired by swarm intelligence, which dynamically analyzes and re-aggregates local gradients from the client side. ECGR suppresses unstable gradient components while preserving informative updates, without introducing additional communication overhead, and integrates seamlessly into existing federated learning frameworks. Both theoretical analysis and extensive experiments validate its effectiveness, demonstrating that ECGR significantly enhances the convergence stability and performance of mainstream federated algorithms in heterogeneous settings, particularly on medical imaging benchmarks such as LC25000.
📝 Abstract
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data, yet its stability is fundamentally challenged by statistical heterogeneity in realistic deployments. Here, we show that client heterogeneity destabilizes FL primarily by distorting local gradient dynamics during client-side optimization, causing systematic drift that accumulates across communication rounds and impedes global convergence. This observation highlights local gradients as a key regulatory lever for stabilizing heterogeneous FL systems. Building on this insight, we develop a general client-side perspective that regulates local gradient contributions without incurring additional communication overhead. Inspired by swarm intelligence, we instantiate this perspective through Exploratory-Convergent Gradient Re-aggregation (ECGR), which balances well-aligned and misaligned gradient components to preserve informative updates while suppressing destabilizing effects. Theoretical analysis and extensive experiments, including evaluations on the LC25000 medical imaging dataset, demonstrate that regulating local gradient dynamics consistently stabilizes federated learning across state-of-the-art methods under heterogeneous data distributions.
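The abstract does not spell out the re-aggregation rule, but its core idea, balancing the gradient components aligned with a reference direction against the misaligned remainder, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the function name `ecgr_reaggregate`, the choice of the global update as the reference direction `ref_grad`, and the damping factor `beta` are all assumptions for illustration.

```python
import numpy as np

def ecgr_reaggregate(local_grad, ref_grad, beta=0.5):
    """Illustrative sketch of aligned/misaligned gradient balancing.

    Splits a client's local gradient into a component aligned with a
    reference direction (e.g., the last global update) and the
    misaligned remainder, then damps the misaligned part by `beta`.
    With beta=1 the local gradient is unchanged; with beta=0 only the
    aligned (convergent) component survives.
    """
    ref_unit = ref_grad / (np.linalg.norm(ref_grad) + 1e-12)
    aligned = np.dot(local_grad, ref_unit) * ref_unit  # convergent component
    misaligned = local_grad - aligned                  # exploratory component
    return aligned + beta * misaligned
```

In a federated round, each client would apply such a re-weighting to its own accumulated update before sending it to the server, which is why no extra communication is needed beyond the usual exchange of model updates.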