Efficient Single-Step Framework for Incremental Class Learning in Neural Networks

📅 2025-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe catastrophic forgetting, high computational overhead, and reliance on multi-iteration optimization in class-incremental learning, this paper proposes a single-step incremental learning framework tailored for resource-constrained scenarios. Our method freezes the pre-trained feature extractor to ensure representation stability, employs a compact compressed buffer for efficient historical sample storage, and adopts a non-iterative, closed-form linear classifier for immediate adaptation to novel classes. Crucially, the framework entirely avoids model fine-tuning, drastically reducing training time and memory footprint. On standard benchmarks, it achieves accuracy competitive with state-of-the-art methods while accelerating training by multiple orders of magnitude. To our knowledge, this is the first approach that simultaneously mitigates catastrophic forgetting at the classifier level and enables end-to-end lightweight deployment—without compromising accuracy.

📝 Abstract
Incremental learning remains a critical challenge in machine learning, as models often struggle with catastrophic forgetting, the tendency to lose previously acquired knowledge when learning new information. These challenges are even more pronounced in resource-limited settings. Many existing Class Incremental Learning (CIL) methods achieve high accuracy by continually adapting their feature representations; however, they often require substantial computational resources and complex, iterative training procedures. This work introduces CIFNet (Class Incremental and Frugal Network), a novel CIL approach that addresses these limitations by offering a highly efficient and sustainable solution. CIFNet's key innovation lies in its novel integration of several existing, yet separately explored, components: a pre-trained and frozen feature extractor, a compressed data buffer, and an efficient non-iterative one-layer neural network for classification. The pre-trained and frozen feature extractor eliminates computationally expensive fine-tuning of the backbone. This, combined with a compressed buffer for efficient memory use, enables CIFNet to perform class-incremental learning through a single-step optimization process on fixed features, minimizing computational overhead and training time without requiring multiple weight updates. Experiments on benchmark datasets confirm that CIFNet effectively mitigates catastrophic forgetting at the classifier level, achieving accuracy comparable to existing state-of-the-art methods while substantially improving training efficiency and sustainability. CIFNet represents a significant advancement in making class-incremental learning more accessible and pragmatic in environments with limited resources, especially when strong pre-trained feature extractors are available.
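The "non-iterative one-layer" classifier described in the abstract can be illustrated as a closed-form ridge-regression fit on frozen features. This is a minimal sketch under that assumption; the paper does not spell out CIFNet's exact formulation, and the function names here are hypothetical:

```python
import numpy as np

def fit_closed_form(features, labels, n_classes, reg=1e-3):
    """Fit a one-layer linear classifier in a single step (no gradient iterations).

    features : (n, d) array of frozen-backbone embeddings
    labels   : (n,) integer class labels
    reg      : ridge regularizer for numerical stability
    """
    Y = np.eye(n_classes)[labels]          # one-hot targets, shape (n, k)
    d = features.shape[1]
    # Closed-form solution: W = (X^T X + reg*I)^{-1} X^T Y
    W = np.linalg.solve(features.T @ features + reg * np.eye(d), features.T @ Y)
    return W

def predict(W, features):
    # Class scores are a single matrix product; argmax picks the class
    return (features @ W).argmax(axis=1)
```

The single `np.linalg.solve` call replaces the multi-epoch weight updates that iterative CIL methods rely on, which is where the claimed training-time savings would come from.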
Problem

Research questions and friction points this paper is trying to address.

Address catastrophic forgetting in incremental class learning
Reduce computational overhead in resource-limited settings
Enable efficient single-step optimization without fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-trained frozen feature extractor
Compressed data buffer
Efficient non-iterative neural network
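One plausible way the components above combine into class-incremental learning is to maintain running sufficient statistics (the Gram matrix and the feature-target cross-correlation) and re-solve the classifier in closed form when each task arrives. This is an illustrative sketch, not necessarily CIFNet's exact mechanism; the class name and interface are assumptions:

```python
import numpy as np

class IncrementalLinearClassifier:
    """Closed-form linear classifier updated one task at a time.

    Old raw samples never need to be replayed in full: each task only
    adds its contribution to the accumulated X^T X and X^T Y statistics,
    and the output layer is re-solved in one step.
    """

    def __init__(self, feat_dim, reg=1e-3):
        self.G = np.zeros((feat_dim, feat_dim))   # running X^T X
        self.C = np.zeros((feat_dim, 0))          # running X^T Y; columns grow with classes
        self.reg = reg
        self.W = None

    def update(self, features, labels, n_total_classes):
        # Grow the cross-correlation matrix when novel classes appear
        if n_total_classes > self.C.shape[1]:
            pad = n_total_classes - self.C.shape[1]
            self.C = np.hstack([self.C, np.zeros((self.C.shape[0], pad))])
        Y = np.eye(n_total_classes)[labels]
        self.G += features.T @ features
        self.C += features.T @ Y
        d = self.G.shape[0]
        # Single-step solve; no iterative weight updates, no backbone fine-tuning
        self.W = np.linalg.solve(self.G + self.reg * np.eye(d), self.C)

    def predict(self, features):
        return (features @ self.W).argmax(axis=1)
```

Because the accumulated statistics summarize all tasks seen so far, the re-solved weights are identical to a joint closed-form fit on all data, which is one way forgetting can be avoided at the classifier level.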