🤖 AI Summary
In open-world federated learning (FL), continuously joining clients introduce challenges in detecting and integrating emerging knowledge. To address this, we propose Gains, a fine-grained federated domain adaptation method. Unlike coarse-grained approaches, Gains decouples the model into an encoder, whose features are sensitive to domain shifts, and a classifier, whose parameters are sensitive to class increments, enabling precise detection of novel knowledge. It further introduces contribution-driven weighted aggregation and anti-forgetting regularization to jointly preserve source-domain stability and enhance target-domain adaptability during dynamic knowledge integration. By incorporating feature sensitivity analysis and model decomposition, Gains embeds seamlessly into open-set FL frameworks. Extensive experiments across three canonical data-shift scenarios (covariate, label, and concept shift) demonstrate that Gains consistently outperforms state-of-the-art baselines, improving performance for both source- and target-domain clients.
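The summary above describes fine-grained knowledge discovery: domain shifts are detected through encoder features, while class increments are detected through the classifier. The paper's actual detection rules are not given here; the sketch below illustrates the general idea with two common heuristics, where all function names, thresholds, and statistics are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def detect_domain_shift(client_feats, source_mean, source_std, thresh=3.0):
    """Flag a domain (covariate) shift when the new client's encoder-feature
    mean drifts beyond `thresh` source standard deviations (hypothetical rule)."""
    drift = np.abs(client_feats.mean(axis=0) - source_mean) / (source_std + 1e-8)
    return bool(drift.mean() > thresh)

def detect_new_classes(class_probs, conf_thresh=0.5, frac_thresh=0.5):
    """Flag a class increment when most samples receive low max-confidence
    over the known classes (a standard open-set heuristic, not the paper's)."""
    low_conf_frac = (class_probs.max(axis=1) < conf_thresh).mean()
    return bool(low_conf_frac > frac_thresh)
```

A new client whose features drift but whose predictions stay confident would then be treated as bringing a domain shift only, while low-confidence predictions would additionally signal unseen classes.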
📝 Abstract
Conventional federated learning (FL) assumes a closed world with a fixed total number of clients. In contrast, new clients continuously join the FL process in real-world scenarios, introducing new knowledge. This raises two critical demands: detecting new knowledge, i.e., knowledge discovery, and integrating it into the global model, i.e., knowledge adaptation. Existing research focuses on coarse-grained knowledge discovery and often sacrifices source-domain performance and adaptation efficiency. To this end, we propose a fine-grained federated domain adaptation approach in open set (Gains). Gains splits the model into an encoder and a classifier, empirically revealing that features extracted by the encoder are sensitive to domain shifts while classifier parameters are sensitive to class increments. Based on this, we develop fine-grained knowledge discovery and contribution-driven aggregation techniques to identify and incorporate new knowledge. Additionally, an anti-forgetting mechanism is designed to preserve source-domain performance, ensuring balanced adaptation. Experimental results on multi-domain datasets across three typical data-shift scenarios demonstrate that Gains significantly outperforms other baselines for both source-domain and target-domain clients. Code is available at: https://github.com/Zhong-Zhengyi/Gains.
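The abstract names two server-side ingredients: contribution-driven aggregation and an anti-forgetting mechanism. A minimal sketch of how these could be realized is shown below, assuming update-norm contribution weights and an L2 pull toward the source-domain model; both choices are illustrative assumptions, not the paper's actual formulation (see the linked repository for that):

```python
import numpy as np

def contribution_weights(deltas):
    """Weight each client by the norm of its update
    (a hypothetical proxy for its contribution of new knowledge)."""
    norms = np.array([np.linalg.norm(d) for d in deltas])
    return norms / norms.sum()

def aggregate(global_params, client_params):
    """Contribution-weighted aggregation of client updates into the global model."""
    deltas = [c - global_params for c in client_params]
    w = contribution_weights(deltas)
    return global_params + sum(wi * d for wi, d in zip(w, deltas))

def anti_forgetting_loss(task_loss, params, source_anchor, lam=0.1):
    """Add an L2 penalty pulling parameters toward the source-domain model,
    so adapting to target domains does not erase source-domain knowledge."""
    return task_loss + lam * np.sum((params - source_anchor) ** 2)
```

With uniform updates the scheme reduces toward plain averaging, while clients carrying larger (presumably newer) updates receive proportionally more weight; the regularizer trades target-domain plasticity against source-domain stability via `lam`.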