🤖 AI Summary
Existing federated prototype learning methods are constrained by a single global prototype, struggling to simultaneously preserve feature fidelity and discriminability under data and model heterogeneity. To address this limitation, this work proposes FedDBP, which employs a dual-branch feature projector on the client side, integrating L2 alignment with contrastive learning to enhance feature quality. On the server side, it leverages Fisher information estimation to dynamically weight feature channels for personalized fusion of global prototypes. This approach overcomes the shortcomings of conventional methods—namely, limited feature expressiveness and rigid prototype representations—and achieves significant performance gains over ten state-of-the-art baselines across multiple benchmarks, demonstrating its effectiveness and robustness in heterogeneous federated learning settings.
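The client-side dual-branch objective (L2 alignment for fidelity plus contrastive learning for discriminability) can be sketched minimally as below. The specific loss forms, the weighting factor `lam`, and the temperature `tau` are assumptions for illustration; the summary does not give FedDBP's actual formulas.

```python
import numpy as np

def l2_alignment_loss(features, prototypes, labels):
    """Fidelity branch (assumed form): mean squared L2 distance between
    each local feature and its own class prototype."""
    diffs = features - prototypes[labels]
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

def contrastive_loss(features, prototypes, labels, tau=0.5):
    """Discriminability branch (assumed InfoNCE-style form): pull each
    feature toward its class prototype, push it from other prototypes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = f @ p.T / tau                               # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(labels)), labels]))

def dual_branch_loss(features, prototypes, labels, lam=1.0, tau=0.5):
    """Combined client objective: fidelity (L2) + discriminability (contrastive)."""
    return (l2_alignment_loss(features, prototypes, labels)
            + lam * contrastive_loss(features, prototypes, labels, tau=tau))
```

When features sit exactly on their class prototypes the L2 branch vanishes, while the contrastive branch still rewards separation between prototypes of different classes.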
📝 Abstract
Federated prototype learning (FPL), as a solution to heterogeneous federated learning (HFL), effectively alleviates the challenges of data and model heterogeneity. However, existing FPL methods fail to balance the fidelity and discriminability of features and are limited by a single global prototype. In this paper, we propose FedDBP, a novel FPL method that addresses these issues. On the client side, we design a Dual-Branch feature projector that employs L2 alignment and contrastive learning simultaneously, thereby ensuring both the fidelity and discriminability of local features. On the server side, we introduce a Personalized global prototype fusion approach that leverages Fisher information to identify the important channels of local prototypes. Extensive experiments demonstrate the superiority of FedDBP over ten existing advanced methods.
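The server-side fusion can be illustrated with a minimal sketch, assuming a diagonal empirical Fisher estimate (mean squared gradients per feature channel) and a channel-wise Fisher-normalized weighted average of client prototypes. These choices are assumptions for illustration; FedDBP's actual personalization scheme may differ.

```python
import numpy as np

def channel_fisher(grads):
    """Diagonal Fisher estimate per feature channel: mean squared gradient
    over local samples (a common empirical approximation)."""
    grads = np.asarray(grads)                 # (N, D)
    return np.mean(grads ** 2, axis=0)        # (D,)

def fuse_prototypes(local_protos, fishers, eps=1e-8):
    """Fuse one class's local prototypes channel-wise: each client's
    contribution to a channel is proportional to its Fisher importance
    on that channel, normalized across clients."""
    local_protos = np.asarray(local_protos)   # (K, D): K clients, D channels
    fishers = np.asarray(fishers)             # (K, D)
    weights = fishers / (fishers.sum(axis=0, keepdims=True) + eps)
    return (weights * local_protos).sum(axis=0)  # (D,) fused prototype
```

Under this scheme, a channel that carries high Fisher information on only one client is effectively copied from that client's prototype, so the fused prototype adapts per channel rather than averaging uniformly.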