AI Summary
This work addresses the challenge of generalization to unseen clients in heterogeneous federated learning by explicitly identifying and jointly tackling two key issues: optimization divergence and performance divergence. To this end, the authors propose FedRD, a novel algorithm that employs a heterogeneity-aware parameter guidance mechanism to jointly optimize global model aggregation and local debiased classifier training. This approach effectively narrows the performance gap between participating and unseen clients. Extensive experiments on multiple public multi-domain datasets demonstrate that FedRD significantly outperforms existing methods, substantially enhancing the model's generalization capability to newly joined, heterogeneous clients.
Abstract
Heterogeneous federated learning (HFL) aims to enable effective, privacy-preserving collaboration among different entities. Because newly joined clients require significant adjustment and additional training to align with the existing system, generalizing federated learning models to unseen clients under heterogeneous data has become increasingly important. We highlight two unresolved challenges in federated domain generalization: Optimization Divergence and Performance Divergence. To tackle these challenges, we propose FedRD, a novel heterogeneity-aware federated learning algorithm that combines parameter-guided global generalization aggregation with local debiased classification to reduce both divergences, aiming to obtain a single global model that serves participating and unseen clients alike. Extensive experiments on public multi-domain datasets demonstrate that our approach substantially outperforms competing baselines on this problem.
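The abstract names FedRD's two components but gives no formulas here, so the following Python sketch is a rough illustration only. It assumes heterogeneity is scored by each client's parameter-space distance from the current global model, and that "debiasing" means inverse-class-frequency reweighting of the local classification loss. The function names and both weighting rules are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def heterogeneity_aware_aggregate(global_params, client_params):
    """Aggregate client parameters, down-weighting clients whose
    parameters diverge most from the current global model.
    (Illustrative weighting; FedRD's parameter-guidance rule is
    defined in the paper, not reproduced here.)"""
    # Parameter-space distance of each client from the global model.
    dists = np.array([np.linalg.norm(p - global_params) for p in client_params])
    # Softmax over shifted negative distances: closer clients get more weight;
    # the shift by the minimum keeps the exponentials numerically stable.
    w = np.exp(-(dists - dists.min()) / (dists.std() + 1e-8))
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

def debiased_ce_loss(logits, labels, class_counts):
    """Cross-entropy reweighted by inverse class frequency -- one common
    way to debias a local classifier under skewed label distributions
    (again an assumption, not necessarily FedRD's exact loss)."""
    class_w = class_counts.sum() / (len(class_counts) * class_counts)
    # Numerically stable log-softmax.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_example = -log_probs[np.arange(len(labels)), labels]
    return float((class_w[labels] * per_example).mean())

# Toy usage: three clients with 5-dim parameter vectors, one an outlier.
rng = np.random.default_rng(0)
g = np.zeros(5)
clients = [g + rng.normal(scale=s, size=5) for s in (0.1, 0.1, 2.0)]
new_global = heterogeneity_aware_aggregate(g, clients)
loss = debiased_ce_loss(rng.normal(size=(4, 2)),
                        np.array([0, 1, 1, 1]),
                        np.array([1.0, 3.0]))
```

In this sketch the outlier client receives the smallest aggregation weight, and minority-class examples contribute more to the local loss; the actual FedRD objective should be taken from the paper itself.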