AI Summary
To address poor global-model generalization and the inefficient collaborative use of locally private parameters in personalized federated learning (pFL) under statistical heterogeneity, this paper proposes an energy-function-driven Mixture-of-Experts (MoE) framework. Methodologically, it introduces Energy-Based Models (EBMs) into the MoE architecture for the first time, designs a lightweight denoising mechanism that enables trust-aware selection and zero-shot reuse of client-specific modules across devices, and integrates model-splitting-based pFL with cross-device private-parameter distillation. Evaluated on six benchmark datasets under two statistical heterogeneity settings, the method consistently improves the performance of nine state-of-the-art pFL algorithms, achieving average accuracy gains of 1.2–3.8%. Crucially, these improvements incur only negligible communication and computational overhead.
Abstract
Federated learning (FL) has gained widespread attention for its privacy-preserving and collaborative learning capabilities. Under significant statistical heterogeneity, however, traditional FL struggles to generalize a single shared model across diverse data domains. Personalized federated learning addresses this issue by dividing the model into a globally shared part and a locally private part, with the local part correcting representation biases introduced by the global part. Nevertheless, locally converged parameters capture domain-specific knowledge more accurately, and current methods overlook the potential benefits of sharing these parameters. To address these limitations, we propose the PM-MoE architecture, which integrates a mixture of personalized modules with an energy-based denoising of those modules, enabling each client to select beneficial personalized parameters from other clients. We applied PM-MoE to nine recent model-split-based personalized federated learning algorithms, achieving performance improvements with minimal additional training. Extensive experiments on six widely adopted datasets and two heterogeneity settings validate the effectiveness of our approach. The source code is available at https://github.com/dannis97500/PM-MOE.
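The mixture step described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): a client keeps a shared-backbone representation `z`, gathers personalized heads from other clients as frozen experts, and learns a lightweight gate that mixes expert outputs per sample. All names (`PersonalizedMoE`, `gate_W`, `expert_Ws`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax for the gating weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class PersonalizedMoE:
    """Illustrative sketch: mix a pool of personalized heads via a learned gate.

    gate_W    : (feat_dim, E) gating weights (trainable on the client)
    expert_Ws : list of E (feat_dim, C) personalized heads
                (the client's own head plus frozen heads imported from peers)
    """
    def __init__(self, gate_W, expert_Ws):
        self.gate_W = gate_W
        self.expert_Ws = expert_Ws

    def forward(self, z):
        # z: shared-backbone features, shape (batch, feat_dim)
        w = softmax(z @ self.gate_W)                         # (batch, E)
        outs = np.stack([z @ W for W in self.expert_Ws], 1)  # (batch, E, C)
        return (w[..., None] * outs).sum(axis=1)             # (batch, C)

feat_dim, n_classes, n_experts = 16, 10, 4
moe = PersonalizedMoE(
    rng.normal(size=(feat_dim, n_experts)),
    [rng.normal(size=(feat_dim, n_classes)) for _ in range(n_experts)],
)
y = moe.forward(rng.normal(size=(8, feat_dim)))
print(y.shape)  # (8, 10)
```

In the paper's framework, an energy-based score would additionally filter out noisy imported modules before (or alongside) this mixing; the sketch above shows only the gating/mixing mechanics.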