🤖 AI Summary
In personalized federated learning with dynamically joining clients, existing methods struggle to simultaneously preserve performance stability for incumbent clients and enable rapid personalization for newly joined clients, while lacking mechanisms for cross-batch knowledge transfer.
Method: We propose a data-free continual adaptation framework built upon a centralized hypernetwork that generates client-specific subnetworks. To stabilize historical knowledge, we introduce batch-specific binary masks; to enable cross-batch knowledge transfer without raw data, we integrate DeepInversion for synthetic data replay.
Contribution/Results: This is the first approach to enable continual personalized adaptation in dynamic federated settings without requiring original client data. Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate significant improvements over state-of-the-art methods: better neural-resource efficiency, preserved accuracy for legacy clients, and faster convergence for new clients.
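To make the hypernetwork-plus-mask mechanism concrete, here is a minimal numpy sketch, not the paper's implementation: a central hypernetwork maps a learnable per-client embedding to the parameters of a small client model, and a batch-specific binary mask zeroes out parameters outside that batch's allotted subset so neurons reserved for earlier batches stay untouched. The two-layer hypernetwork, the dimensions, and the `batch_mask` capacity schedule are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM, HIDDEN = 8, 32
TARGET_PARAMS = 10 * 5 + 5  # a tiny 10->5 target layer (weights + biases), flattened

# Shared hypernetwork weights (held by the server in the sketch)
W1 = rng.normal(0, 0.1, (EMBED_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, TARGET_PARAMS))

# One learnable embedding vector per client (hypothetical client ids)
client_embeddings = {c: rng.normal(0, 1, EMBED_DIM) for c in ["c0", "c1"]}

def batch_mask(batch_id, n=TARGET_PARAMS):
    """Binary mask activating only this batch's parameter subset.
    The schedule (60% for batch 0, +20% per later batch) is an assumption."""
    keep = n * (6 + 2 * batch_id) // 10  # integer arithmetic, no float rounding
    mask = np.zeros(n)
    mask[:keep] = 1.0
    return mask

def generate_subnetwork(client_id, batch_id):
    """Hypernetwork forward pass: embedding -> client-specific parameters,
    then the batch mask zeroes parameters outside the active subset."""
    e = client_embeddings[client_id]
    h = np.tanh(e @ W1)
    theta = h @ W2
    return theta * batch_mask(batch_id)

theta_c0 = generate_subnetwork("c0", batch_id=0)
```

In training, gradients would flow through the mask back into `W1`, `W2`, and the client embedding, so updates for a new batch cannot disturb parameters the mask zeroes out; this is the sketch's stand-in for the stability mechanism described above.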
📝 Abstract
Federated Learning (FL) enables collaborative model training across distributed clients without sharing raw data, offering a significant privacy benefit. However, most existing Personalized Federated Learning (pFL) methods assume static client participation, which does not reflect real-world scenarios where new clients may continuously join the federated system (i.e., dynamic client onboarding). In this paper, we explore a practical scenario in which new batches of clients are introduced incrementally while the learning task remains unchanged. This dynamic environment poses several challenges, including preserving performance for existing clients without retraining and enabling efficient knowledge transfer between client batches. To address these issues, we propose Personalized Federated Data-Free Sub-Hypernetwork (pFedDSH), a novel framework based on a central hypernetwork that generates personalized models for each client via embedding vectors. To maintain knowledge stability for existing clients, pFedDSH incorporates batch-specific masks, which activate subsets of neurons to preserve knowledge. Furthermore, we introduce a data-free replay strategy inspired by DeepInversion to facilitate backward transfer, enhancing existing clients' performance without compromising privacy. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that pFedDSH outperforms state-of-the-art pFL and Federated Continual Learning baselines in this scenario. Our approach achieves robust performance stability for existing clients, effective adaptation for new clients, and efficient utilization of neural resources.
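For intuition on the data-free replay component: DeepInversion synthesizes training-like inputs from a frozen model by gradient descent on the input itself, pushing up a target-class logit while matching feature statistics stored in the model (in the original work, batch-norm running statistics). The toy numpy sketch below applies that recipe to a linear classifier, where the objective is quadratic and the gradient is analytic; the frozen model, `stored_mean`, and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

D, C = 16, 3
W = rng.normal(0, 0.5, (C, D))      # frozen "teacher" classifier from an old batch
stored_mean = rng.normal(0, 1, D)   # saved feature statistic (BN-style running mean)

def invert_class(c, steps=200, lr=0.1, lam=0.01, beta=0.5):
    """Synthesize an input the frozen model associates with class c,
    regularized toward the stored feature statistics (DeepInversion-style prior).

    Objective minimized over x:
        -W[c] @ x  +  lam * ||x||^2  +  beta * ||x - stored_mean||^2
    """
    x = rng.normal(0, 1, D)
    for _ in range(steps):
        grad = -W[c] + 2 * lam * x + 2 * beta * (x - stored_mean)
        x -= lr * grad
    return x

x_syn = invert_class(0)  # one synthetic replay sample for class 0
```

Such synthetic samples, generated on the server without touching any client's raw data, can then be replayed when training later batches, which is the sketch's analogue of the privacy-preserving backward-transfer mechanism in the abstract.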