🤖 AI Summary
Open-world class-incremental learning (CIL) faces dual challenges: mitigating catastrophic forgetting of previously learned classes while simultaneously rejecting inputs from unseen, out-of-distribution (OOD) classes. Existing approaches rely on replaying cached historical data, which compromises privacy, scalability, and training efficiency. This paper proposes a memory-free continual learning framework and, for the first time, systematically demonstrates that post-hoc OOD detection methods, such as energy-based scoring and Mahalanobis distance, can effectively replace conventional replay buffers. Leveraging a multi-head network architecture, the approach jointly performs task identification and OOD detection during inference. Evaluated on CIFAR-10/100 and Tiny ImageNet, it achieves CIL accuracy and OOD detection AUC comparable to or exceeding state-of-the-art buffer-based methods, while reducing training time by 37% and eliminating all historical data storage. The method thus advances performance, privacy preservation, and scalability in open-world continual learning.
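To make the "post-hoc" part concrete, here is a minimal sketch of the energy score mentioned above, one of the detection methods the paper evaluates. It needs only a model's logits, with no replay buffer and no retraining; the function name and the temperature default are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Post-hoc energy score for OOD detection (Liu et al. style):
    score(x) = T * logsumexp(f(x) / T), computed from logits alone.
    Higher scores indicate in-distribution inputs; OOD inputs tend to
    yield uniformly low logits and therefore a low score."""
    z = np.asarray(logits, dtype=np.float64) / temperature
    m = z.max(axis=-1)                       # shift for numerical stability
    return temperature * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))

# A confidently peaked logit vector scores higher than a flat one,
# so thresholding the score separates known from unknown inputs.
known_like   = energy_score([10.0, 0.0, 0.0])
unknown_like = energy_score([0.1, 0.0, 0.0])
```

Because the score is a pure function of the logits, it can be attached to any already-trained task head at inference time, which is what lets it stand in for a buffer-based detector.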
📝 Abstract
Class-incremental learning (CIL) poses significant challenges in open-world scenarios, where models must not only learn new classes over time without forgetting previous ones but also handle inputs from unknown classes that a closed-set model would misclassify. Recent works address both issues by (i) training multi-head models using the task-incremental learning framework, and (ii) predicting the task identity by employing out-of-distribution (OOD) detectors. While effective, the latter mainly relies on joint training with a memory buffer of past data, raising concerns around privacy, scalability, and increased training time. In this paper, we present an in-depth analysis of post-hoc OOD detection methods and investigate their potential to eliminate the need for a memory buffer. We uncover that these methods, when applied appropriately at inference time, can serve as a strong substitute for buffer-based OOD detection. We show that this buffer-free approach achieves comparable or superior performance to buffer-based methods both in terms of class-incremental learning and the rejection of unknown samples. Experimental results on CIFAR-10, CIFAR-100 and Tiny ImageNet datasets support our findings, offering new insights into the design of efficient and privacy-preserving CIL systems for open-world settings.
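The inference scheme described in (i) and (ii) can be sketched as follows. This is an assumed decision rule, not the paper's exact algorithm: score each task head post hoc, route the input to the best-scoring head, and reject it as unknown when no head is confident enough. The `threshold` parameter and the use of the energy score as the per-head detector are illustrative assumptions:

```python
import numpy as np

def predict_open_world(head_logits, threshold):
    """Multi-head open-world inference (sketch, assumed scheme).

    head_logits: list of per-task logit vectors, one per head.
    Returns (task_id, class_id), or (None, None) when every head's
    post-hoc OOD score falls below `threshold` (unknown input).
    """
    def logsumexp(v):
        v = np.asarray(v, dtype=np.float64)
        m = v.max()
        return float(m + np.log(np.exp(v - m).sum()))

    scores = [logsumexp(l) for l in head_logits]   # energy score per head
    best = int(np.argmax(scores))                  # predicted task identity
    if scores[best] < threshold:
        return None, None                          # reject as OOD / unknown
    return best, int(np.argmax(head_logits[best])) # within-task class
```

The key point the abstract makes is that this routing-and-rejection step needs no stored past data: each head is scored with a buffer-free, post-hoc detector rather than a detector jointly trained on a replay buffer.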