🤖 AI Summary
This paper addresses Class-Incremental Source-Free Unsupervised Domain Adaptation (CI-SFUDA), where unlabeled target data arrive sequentially and no source samples or labels are accessible during training. This setting poses two core challenges: interference of similar source-class knowledge with representation learning for emerging target classes, and catastrophic forgetting of previously learned classes during new-class acquisition. To tackle these, we propose the Multi-Granularity Class Prototype Topology Distillation (GROTO) framework. First, positive-class mining and pseudo-label refinement are performed via dual accumulation-distribution modeling with multi-granularity class prototypes. Then, class prototype topologies are constructed in both the source and target feature spaces, and cross-domain topology distillation enables implicit, robust source-knowledge transfer while mitigating forgetting. Extensive experiments on three benchmark datasets demonstrate significant improvements in incremental accuracy and backward stability, achieving state-of-the-art performance.
📝 Abstract
This paper explores the Class-Incremental Source-Free Unsupervised Domain Adaptation (CI-SFUDA) problem, where unlabeled target data arrive incrementally without access to labeled source instances. This problem poses two challenges: the disturbance of similar source-class knowledge to target-class representation learning, and the disturbance of new target knowledge to old knowledge. To address them, we propose the Multi-Granularity Class Prototype Topology Distillation (GROTO) algorithm, which effectively transfers source knowledge to the unlabeled class-incremental target domain. Concretely, we design a multi-granularity class prototype self-organization module and a prototype topology distillation module. First, positive classes are mined by modeling two accumulation distributions. We then generate reliable pseudo-labels by introducing multi-granularity class prototypes, and use them to promote positive-class target feature self-organization. Second, the positive-class prototypes are leveraged to construct the topological structures of the source and target feature spaces, and topology distillation is performed to continually mitigate the interference of new target knowledge with old knowledge. Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance on three public datasets.
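To make the topology-distillation idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes a prototype "topology" is a row-normalized pairwise cosine-similarity matrix, and distills by penalizing the cross-entropy between the source and target relation distributions. The function names (`prototype_topology`, `topology_distillation_loss`) and these specific design choices are illustrative assumptions.

```python
import numpy as np

def prototype_topology(prototypes: np.ndarray) -> np.ndarray:
    """One plausible prototype topology: pairwise cosine similarities,
    softmax-normalized per row (self-similarity excluded)."""
    norms = np.linalg.norm(prototypes, axis=1, keepdims=True)
    unit = prototypes / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)  # a prototype's relation to itself is ignored
    e = np.exp(sim - sim.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

def topology_distillation_loss(src_protos: np.ndarray,
                               tgt_protos: np.ndarray) -> float:
    """Cross-entropy between the (fixed) source topology and the target
    topology; minimized when the target relations match the source ones."""
    t_src = prototype_topology(src_protos)
    t_tgt = prototype_topology(tgt_protos)
    return float(-(t_src * np.log(t_tgt + 1e-12)).sum(axis=1).mean())
```

Under this sketch, the loss reaches its minimum (the source topology's entropy) when target prototypes reproduce the source relation structure, so preserving old-class relations during incremental updates directly lowers the penalty.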