🤖 AI Summary
To address degraded model generalization in federated learning (FL) caused by data distribution heterogeneity and uncertainty—such as label and feature shifts—this paper proposes Distributionally Robust Federated Learning (DRFL). DRFL is the first framework to systematically integrate Distributionally Robust Optimization (DRO) into FL: it models distributional uncertainty via a Wasserstein ambiguity set and derives a tractable convex reformulation. Furthermore, we design a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM), ensuring convergence while substantially reducing communication overhead. Extensive experiments demonstrate that DRFL achieves robust accuracy gains of 3.2–7.8% over baselines (e.g., FedAvg) across diverse non-IID settings, exhibiting strong adaptability to various distribution shifts. This work establishes a novel paradigm for robust federated modeling in heterogeneous edge environments.
📝 Abstract
Federated learning (FL) aims to train machine learning (ML) models collaboratively using decentralized data, bypassing the need for centralized data aggregation. Standard FL models often assume that all data come from the same unknown distribution. However, in practical situations, decentralized data frequently exhibit heterogeneity. We propose a novel FL model, Distributionally Robust Federated Learning (DRFL), that applies distributionally robust optimization to overcome the challenges posed by data heterogeneity and distributional ambiguity. We derive a tractable reformulation for DRFL and develop a novel solution method based on the alternating direction method of multipliers (ADMM) algorithm to solve this problem. Our experimental results demonstrate that DRFL outperforms standard FL models under data heterogeneity and ambiguity.