Multi-Granularity Class Prototype Topology Distillation for Class-Incremental Source-Free Unsupervised Domain Adaptation

📅 2024-11-25
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses Class-Incremental Source-Free Unsupervised Domain Adaptation (CI-SFUDA), where unlabeled target data arrive sequentially and no source samples or labels are accessible during training. This setting poses two core challenges: interference of source-class knowledge with representation learning for emerging target classes, and catastrophic forgetting of previously learned classes during new-class acquisition. To tackle these, we propose a Multi-Granularity Class Prototype Self-Organization and Topological Distillation framework. First, positive-class mining and pseudo-label refinement are performed via dual cumulative distribution modeling. Then, class prototype topologies are constructed in both source and target feature spaces, followed by cross-domain topological distillation to enable implicit, robust source-knowledge transfer. To our knowledge, this is the first fully source-free, unsupervised, and class-incremental adaptation method. Extensive experiments on three benchmark datasets demonstrate significant improvements in incremental accuracy and backward stability, achieving state-of-the-art performance.

📝 Abstract
This paper explores the Class-Incremental Source-Free Unsupervised Domain Adaptation (CI-SFUDA) problem, where unlabeled target data arrive incrementally and no labeled source instances are accessible. This setting poses two challenges: the disturbance of similar source-class knowledge to target-class representation learning, and the disturbance of new target knowledge to previously learned classes. To address them, we propose the Multi-Granularity Class Prototype Topology Distillation (GROTO) algorithm, which effectively transfers source knowledge to the unlabeled class-incremental target domain. Concretely, we design a multi-granularity class prototype self-organization module and a prototype topology distillation module. First, positive classes are mined by modeling two accumulation distributions. Then, reliable pseudo-labels are generated by introducing multi-granularity class prototypes and used to promote positive-class target feature self-organization. Second, the positive-class prototypes are leveraged to construct the topological structures of the source and target feature spaces, and topology distillation is performed to continually mitigate the interference of new target knowledge with old knowledge. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on three public datasets.
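The topology-distillation idea in the abstract can be illustrated with a minimal sketch: compute one prototype per class, describe each feature space by the pairwise similarities among its prototypes, and penalize divergence between the source and target topologies. This is an assumed simplification, not the paper's exact formulation; the function names and the choice of cosine similarity as the topology measure are illustrative.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class: one prototype per class."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def topology(protos):
    """Pairwise cosine-similarity matrix among class prototypes."""
    normed = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return normed @ normed.T

def topology_distillation_loss(source_protos, target_protos):
    """MSE between source and target prototype topologies; minimizing it
    encourages the target feature space to preserve source class relations."""
    diff = topology(source_protos) - topology(target_protos)
    return float(np.mean(diff ** 2))
```

Because the loss depends only on prototype relations, not raw source features, it transfers source structure without needing access to source samples, which is what makes it compatible with the source-free setting.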
Problem

Research questions and friction points this paper is trying to address.

Addresses class-incremental learning without labeled source data.
Mitigates interference between source and target class knowledge.
Reduces disruption of previously learned classes by newly acquired target knowledge.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-granularity class prototype self-organization
Prototype topology distillation for knowledge transfer
Pseudo-label generation for class-incremental adaptation
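The positive-class mining and pseudo-labeling steps above can be sketched roughly as follows. This is a hedged simplification: the paper models two accumulation distributions, whereas here per-class probability mass is simply accumulated and thresholded, and pseudo-labels come from nearest-prototype assignment. All names and the threshold are assumptions for illustration.

```python
import numpy as np

def mine_positive_classes(probs, threshold=0.5):
    """Select classes likely present in the target data by accumulating
    per-class softmax mass (a simplified stand-in for the paper's
    dual accumulation-distribution modeling).
    probs: (N, C) softmax outputs of the source model on target samples."""
    mass = probs.sum(axis=0)               # accumulated evidence per class
    mass = mass / mass.max()               # normalize to [0, 1]
    return np.where(mass >= threshold)[0]  # indices of positive classes

def prototype_pseudo_labels(features, prototypes, positive_classes):
    """Assign each target feature the label of its nearest positive-class
    prototype, yielding pseudo-labels restricted to mined classes."""
    sub = prototypes[positive_classes]
    dists = np.linalg.norm(features[:, None, :] - sub[None, :, :], axis=2)
    return positive_classes[np.argmin(dists, axis=1)]
```

Restricting pseudo-labels to mined positive classes is what limits interference from source classes that are absent in the current target increment.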
Peihua Deng
Hangzhou Dianzi University, Hangzhou, Zhejiang, China
Jiehua Zhang
University of Oulu (Deep learning, Object detection, Model quantization)
Xichun Sheng
Macao Polytechnic University, Macao, China
Chenggang Yan
Hangzhou Dianzi University
Yaoqi Sun
Hangzhou Dianzi University, Hangzhou, Zhejiang, China
Ying Fu
School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
Liang Li
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China