AI Summary
Existing continual learning (CL) paradigms assume the continuous availability of labeled data, an unrealistic assumption in real-world streaming scenarios. To address this, the paper proposes *Annotation-Free Class-Incremental Learning* (AFCIL), a novel CL paradigm in which classes emerge sequentially, unlabeled data arrive in a stream, and no labels are ever provided.
Method: The authors introduce *CrossWorld-CL*, a framework that leverages external world knowledge as semantic priors to mitigate catastrophic forgetting and enable unsupervised novel-class discovery. It integrates cross-domain alignment, ImageNet-based semantic retrieval, knowledge-guided feature mapping, and a novel label-free replay mechanism.
Contribution/Results: This is the first method to achieve fully label-free class-incremental learning. It significantly outperforms CLIP and state-of-the-art continual learning approaches on four standard benchmarks, demonstrating the efficacy and generalizability of harnessing world knowledge for unsupervised continual learning.
Abstract
Despite significant progress in continual learning, ranging from architectural novelty to clever strategies for mitigating catastrophic forgetting, most existing methods rest on a strong but unrealistic assumption: the availability of labeled data throughout the learning process. In real-world scenarios, however, data often arrive sequentially and without annotations, rendering conventional approaches impractical. In this work, we revisit the fundamental assumptions of continual learning and ask: can current systems adapt when labels are absent and tasks emerge incrementally over time? To this end, we introduce Annotation-Free Class-Incremental Learning (AFCIL), a more realistic and challenging paradigm in which unlabeled data arrive continuously and the learner must incrementally acquire new classes without any supervision. To enable effective learning under AFCIL, we propose CrossWorld-CL, a cross-domain world-guided continual learning framework that incorporates external world knowledge as a stable auxiliary source. The method retrieves semantically related ImageNet classes for each downstream category, maps downstream and ImageNet features to each other through a cross-domain alignment strategy, and finally introduces a novel replay strategy. This design lets the model uncover semantic structure without annotations while keeping earlier knowledge intact. Across four datasets, CrossWorld-CL surpasses CLIP baselines as well as existing continual and unsupervised learning methods, underscoring the benefit of world knowledge for annotation-free continual learning.
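The retrieval step described above, finding semantically related ImageNet classes for each downstream category, can be sketched as a nearest-neighbor search in a shared embedding space (e.g. CLIP text embeddings). The sketch below is purely illustrative: the class names, embedding dimension, and random stand-in vectors are assumptions, not the paper's actual data or implementation.

```python
import numpy as np

# Illustrative stand-ins: in the paper's setting these would be text (or
# prototype) embeddings from a pretrained model such as CLIP; here we use
# random L2-normalized vectors so the sketch is self-contained.
rng = np.random.default_rng(0)

imagenet_names = ["tabby cat", "golden retriever", "sports car", "daisy"]
imagenet_emb = rng.normal(size=(len(imagenet_names), 8))
imagenet_emb /= np.linalg.norm(imagenet_emb, axis=1, keepdims=True)

def retrieve_related(query_emb, k=2):
    """Return the k ImageNet class names most similar to the query,
    using cosine similarity (dot product of unit vectors)."""
    q = query_emb / np.linalg.norm(query_emb)
    sims = imagenet_emb @ q              # cosine similarity to each class
    top = np.argsort(sims)[::-1][:k]     # indices of the k best matches
    return [imagenet_names[i] for i in top]

# Stand-in embedding for one downstream category; in practice this would
# come from encoding the category's images or a discovered cluster.
query = rng.normal(size=8)
print(retrieve_related(query, k=2))
```

The retrieved classes would then serve as semantic anchors for the cross-domain alignment and replay steps; how those steps are realized is specified in the paper itself, not in this sketch.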