Technical Report for the 5th CLVision Challenge at CVPR: Addressing the Class-Incremental with Repetition using Unlabeled Data -- 4th Place Solution

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting in the Class-Incremental with Repetition (CIR) setting, where classes reappear across learning experiences, only a subset of classes is observed per experience, and the unlabeled data contain both noisy and unknown-class samples, this work proposes a CIR framework that combines knowledge distillation with confidence-based pseudo-labeling to robustly identify previously seen classes in noisy unlabeled data. The method exploits unlabeled data during training to preserve performance on previously encountered categories across experiences. Evaluated on the 5th CLVision Challenge at CVPR, the approach achieves 16.68% and 21.19% average accuracy in the pre-selection and final evaluation phases, respectively, outperforming the baseline accuracy of 9.39%. The source code is publicly available.

📝 Abstract
This paper outlines our approach to the 5th CLVision challenge at CVPR, which addresses the Class-Incremental with Repetition (CIR) scenario. In contrast to traditional class-incremental learning, this novel setting introduces unique challenges and research opportunities, particularly through the integration of unlabeled data into the training process. In the CIR scenario, encountered classes may reappear in later learning experiences, and each experience may involve only a subset of the overall class distribution. Additionally, the unlabeled data provided during training may include instances of unseen or irrelevant classes, which should be ignored. Our approach focuses on retaining previously learned knowledge by utilizing knowledge distillation and pseudo-labeling techniques. The key characteristic of our method is the exploitation of unlabeled data during training, in order to maintain optimal performance on instances of previously encountered categories and reduce the detrimental effects of catastrophic forgetting. Our method achieves an average accuracy of 16.68% during the pre-selection phase and 21.19% during the final evaluation phase, outperforming the baseline accuracy of 9.39%. We provide the implementation code at https://github.com/panagiotamoraiti/continual-learning-challenge-2024
Problem

Research questions and friction points this paper is trying to address.

How to learn in the Class-Incremental with Repetition (CIR) setting, where classes reappear across experiences and each experience covers only a subset of the class distribution.
How to exploit unlabeled data that may contain unseen or irrelevant classes without amplifying label noise.
How to retain previously learned knowledge and reduce catastrophic forgetting across experiences.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits unlabeled data during training to reinforce previously learned classes
Employs knowledge distillation to retain knowledge across experiences
Uses confidence-based pseudo-labeling to mitigate catastrophic forgetting
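The two mechanisms above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the temperature and confidence threshold are illustrative assumptions, and the paper's actual training pipeline (architecture, losses, schedules) is defined in the linked repository.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Knowledge distillation as KL(teacher || student) between
    # temperature-softened distributions; zero when the student
    # matches the frozen teacher on old classes.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def pseudo_label(unlabeled_logits, threshold=0.9):
    # Confidence-based pseudo-labeling: keep only unlabeled samples
    # whose max softmax probability clears the threshold; the rest
    # (possibly unseen or irrelevant classes) are discarded.
    kept = []
    for logits in unlabeled_logits:
        probs = softmax(logits)
        conf = max(probs)
        if conf >= threshold:
            kept.append((logits, probs.index(conf)))
    return kept
```

In a CIR round, the distillation term would be computed against the previous experience's model to anchor old-class behavior, while only the confidently pseudo-labeled unlabeled samples contribute a supervised-style loss.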