Gains: Fine-grained Federated Domain Adaptation in Open Set

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In open-world federated learning (FL), continuously joining clients introduce challenges in detecting and integrating emerging knowledge. To address this, we propose Gains, a fine-grained federated domain adaptation method. Unlike coarse-grained approaches, Gains decouples the model into an encoder—sensitive to domain shifts—and a classifier—responsible for class-incremental recognition—enabling precise detection of novel knowledge. It further introduces contribution-driven weighted aggregation and anti-forgetting regularization to jointly preserve source-domain stability and enhance target-domain adaptability during dynamic knowledge integration. By incorporating feature sensitivity analysis and model decomposition, Gains is seamlessly embedded into open-set FL frameworks. Extensive experiments across three canonical data shift scenarios (covariate, label, and concept shifts) demonstrate that Gains consistently outperforms state-of-the-art baselines, simultaneously improving performance on both source- and target-domain clients.
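The contribution-driven weighted aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `aggregate` function and the scalar contribution scores are hypothetical, and how Gains actually measures each client's contribution is not specified here.

```python
import numpy as np

def aggregate(updates, contributions):
    """Hypothetical sketch: weight client updates by non-negative
    contribution scores instead of plain data-size averaging."""
    c = np.clip(np.asarray(contributions, dtype=float), 0.0, None)
    if c.sum() == 0.0:
        # No informative scores: fall back to uniform averaging.
        w = np.full(len(updates), 1.0 / len(updates))
    else:
        # Normalize scores so the weights sum to 1.
        w = c / c.sum()
    stacked = np.stack(updates)  # shape: (num_clients, num_params)
    return (w[:, None] * stacked).sum(axis=0)
```

A client judged to contribute more novel, useful knowledge thus pulls the global model further toward its update, while zero-contribution clients degrade gracefully to a plain average.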

📝 Abstract
Conventional federated learning (FL) assumes a closed world with a fixed total number of clients. In contrast, new clients continuously join the FL process in real-world scenarios, introducing new knowledge. This raises two critical demands: detecting new knowledge, i.e., knowledge discovery, and integrating it into the global model, i.e., knowledge adaptation. Existing research focuses on coarse-grained knowledge discovery and often sacrifices source-domain performance and adaptation efficiency. To this end, we propose a fine-grained federated domain adaptation approach in open set (Gains). Gains splits the model into an encoder and a classifier, empirically revealing that features extracted by the encoder are sensitive to domain shifts, while classifier parameters are sensitive to class increments. Based on this, we develop fine-grained knowledge discovery and contribution-driven aggregation techniques to identify and incorporate new knowledge. Additionally, an anti-forgetting mechanism is designed to preserve source-domain performance, ensuring balanced adaptation. Experimental results on multi-domain datasets across three typical data-shift scenarios demonstrate that Gains significantly outperforms baselines for both source-domain and target-domain clients. Code is available at: https://github.com/Zhong-Zhengyi/Gains.
Problem

Research questions and friction points this paper is trying to address.

Detecting new knowledge from continuously joining clients
Integrating new knowledge efficiently into global model
Preserving source domain performance during adaptation process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained knowledge discovery for domain shifts
Contribution-driven aggregation to integrate new knowledge
Anti-forgetting mechanism preserves source domain performance
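One common way to realize an anti-forgetting mechanism is an L2 penalty that anchors the adapted parameters to the source-domain model; the sketch below illustrates that idea under this assumption, with hypothetical function names, since the paper's exact regularizer is not reproduced here.

```python
import numpy as np

def anti_forgetting_penalty(theta, theta_source, lam=0.1):
    """Hypothetical L2 anchor: (lam / 2) * ||theta - theta_source||^2.
    Added to the task loss to discourage drifting from source-domain weights."""
    diff = np.asarray(theta, dtype=float) - np.asarray(theta_source, dtype=float)
    return 0.5 * lam * float(diff @ diff)

def penalty_grad(theta, theta_source, lam=0.1):
    """Gradient of the penalty: pulls parameters back toward the source model."""
    return lam * (np.asarray(theta, dtype=float) - np.asarray(theta_source, dtype=float))
```

During target-domain adaptation, this term trades off plasticity (fitting new clients) against stability (retaining source-domain performance) via the coefficient `lam`.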
Zhengyi Zhong
National University of Defense Technology
federated learning, domain adaptation, continual learning, machine unlearning
Wenzheng Jiang
Laboratory for Big Data and Decision, National University of Defense Technology
Weidong Bao
Laboratory for Big Data and Decision, National University of Defense Technology
Ji Wang
Laboratory for Big Data and Decision, National University of Defense Technology
Cheems Wang
Tsinghua University, Universiteit van Amsterdam, Sichuan University
Large Models, Meta Learning, Multi-Task Learning, Reinforcement Learning, Generative Modeling
Guanbo Wang
Department of Computer Science and Technology, Tsinghua University
Yongheng Deng
Department of Computer Science and Technology, Tsinghua University
Ju Ren
Department of Computer Science and Technology, Tsinghua University
Internet-of-Things, Edge Computing/Intelligence, Security and Privacy