🤖 AI Summary
This work addresses the challenges of convergence difficulty and performance degradation in federated learning caused by non-independent and identically distributed (Non-IID) data and imbalanced client participation. To mitigate these issues, the authors propose a novel approach that integrates dynamic batch size scheduling with selective proximal regularization. The method adaptively adjusts local batch sizes to align with heterogeneous client resources and introduces a proximal correction term specifically for clients with small batches, thereby enhancing training stability. Evaluated under extreme Non-IID settings on benchmark datasets such as CIFAR-10 and UCI-HAR, the proposed algorithm consistently outperforms existing methods—including FedBS, FedGA, MOON, and FedProx—demonstrating smoother convergence behavior and improved robustness.
📝 Abstract
Federated learning (FL) enables a set of distributed clients to jointly train machine learning models while preserving their local data privacy, making it attractive for applications in healthcare, finance, mobility, and smart-city systems. However, FL faces several challenges, including statistical heterogeneity and uneven client participation, which can degrade convergence and model quality. In this work, we propose FedPBS, an FL algorithm that couples complementary ideas from FedBS and FedProx to address these challenges. FedPBS dynamically adapts batch sizes to client resources to support balanced and scalable participation, and selectively applies a proximal correction to small-batch clients to stabilize local updates and reduce divergence from the global model. Experiments on benchmark datasets such as CIFAR-10 and UCI-HAR under highly non-IID settings show that FedPBS consistently outperforms state-of-the-art methods, including FedBS, FedGA, MOON, and FedProx, delivering robust accuracy gains under extreme data heterogeneity while maintaining stable convergence, as reflected in smooth loss curves across diverse federated environments.
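To make the core mechanism concrete, the sketch below illustrates the kind of local update the abstract describes: clients run ordinary mini-batch SGD, but clients whose (resource-adapted) batch size falls below a threshold add a FedProx-style proximal term pulling the local model toward the global weights. All specifics here (logistic-regression loss, the threshold `small_batch_threshold`, the coefficient `mu`, and the function name `local_update`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def local_update(w_global, X, y, batch_size, small_batch_threshold=32,
                 mu=0.5, lr=0.05, epochs=5, seed=0):
    """Hypothetical FedPBS-style local step: plain SGD for large-batch
    clients, proximal-corrected SGD (FedProx-style) for small-batch ones.
    `batch_size` is assumed to have been chosen per client resources."""
    rng = np.random.default_rng(seed)
    w = w_global.copy()
    use_prox = batch_size < small_batch_threshold  # selective regularization
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # logistic-regression gradient on the mini-batch
            p = 1.0 / (1.0 + np.exp(-X[b] @ w))
            grad = X[b].T @ (p - y[b]) / len(b)
            if use_prox:
                # proximal correction: keep small-batch clients
                # anchored near the current global model
                grad += mu * (w - w_global)
            w -= lr * grad
    return w
```

With identical data and batch schedule, the proximal term keeps the resulting local model closer to the global weights, which is the stabilizing effect the abstract attributes to small-batch clients.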