🤖 AI Summary
To address the poor domain generalization of federated models caused by client data heterogeneity, this paper proposes FedSB, a novel approach integrating label smoothing with balanced decentralized training. A client-level label smoothing mechanism mitigates overfitting to domain-specific features, while a decentralized budgeting mechanism balances the amount of training performed across clients, improving the quality of the aggregated global model. Evaluated on four standard cross-domain benchmarks — PACS, VLCS, OfficeHome, and TerraInc — the proposed method outperforms competing approaches and achieves state-of-the-art results on three of the four datasets, demonstrating its effectiveness under non-i.i.d. data distributions.
📝 Abstract
In this paper, we propose a novel approach, Federated Domain Generalization with Label Smoothing and Balanced Decentralized Training (FedSB), to address the challenges of data heterogeneity within a federated learning framework. FedSB utilizes label smoothing at the client level to prevent overfitting to domain-specific features, thereby enhancing generalization capabilities across diverse domains when aggregating local models into a global model. Additionally, FedSB incorporates a decentralized budgeting mechanism that balances training among clients, which is shown to improve the performance of the aggregated global model. Extensive experiments on four commonly used multi-domain datasets — PACS, VLCS, OfficeHome, and TerraInc — demonstrate that FedSB outperforms competing methods, achieving state-of-the-art results on three out of four datasets and indicating the effectiveness of FedSB in addressing data heterogeneity.
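The label smoothing component described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function names and the smoothing coefficient `epsilon` are illustrative, showing only the standard formulation in which each client would train against softened targets rather than hard one-hot labels:

```python
import numpy as np

def smooth_labels(labels, num_classes, epsilon=0.1):
    """Convert integer class labels to smoothed one-hot targets.

    The true class receives 1 - epsilon, and epsilon / num_classes is
    spread uniformly over all classes (standard label smoothing).
    """
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - epsilon) * one_hot + epsilon / num_classes

def smoothed_cross_entropy(logits, labels, num_classes, epsilon=0.1):
    """Cross-entropy against smoothed targets, averaged over the batch."""
    targets = smooth_labels(labels, num_classes, epsilon)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(targets * log_probs).sum(axis=1).mean()
```

Because the targets are never exactly one-hot, the local model is discouraged from assigning full probability mass to domain-specific patterns, which is the overfitting behavior the abstract attributes to heterogeneous client data.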