PROL : Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning

📅 2025-07-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address catastrophic forgetting in online continual learning (OCL) caused by single-pass data access, this paper proposes a lightweight prompt-based OCL framework that requires no historical data replay. Methodologically, it introduces: (1) a universal prompt generator coupled with a learnable scale-and-shift module for parameter-efficient task adaptation; (2) a hard-then-soft two-stage parameter update mechanism that preserves generalization on previous tasks without storing exemplars; and (3) a synergistic integration of pretrained models, prompt tuning, and parameter isolation to curb both trainable parameter growth and inference throughput bottlenecks. Evaluated on benchmarks including CIFAR-100 and ImageNet-R, the method achieves state-of-the-art performance—outperforming existing replay-based and prompt-based OCL approaches—with significantly lower parameter overhead and higher inference throughput.

📝 Abstract
The data privacy constraint in online continual learning (OCL), where data can be seen only once, compounds the catastrophic forgetting problem in streaming data. A common approach among current SOTA OCL methods is to use a memory that saves exemplars or features from previous classes for replay in the current task. Prompt-based approaches, on the other hand, perform excellently in continual learning but at the cost of a growing number of trainable parameters. The first approach may be inapplicable in practice due to data openness policies, while the second suffers from throughput issues with streaming data. In this study, we propose a novel prompt-based method for online continual learning with four main components: (1) a single lightweight prompt generator as general knowledge, (2) a trainable scaler-and-shifter as specific knowledge, (3) pre-trained model (PTM) generalization preserving, and (4) a hard-soft updates mechanism. Our proposed method achieves significantly higher performance than current SOTAs on the CIFAR-100, ImageNet-R, ImageNet-A, and CUB datasets. Our complexity analysis shows that our method requires relatively fewer parameters and achieves moderate training time, inference time, and throughput. For further study, the source code of our method is available at https://github.com/anwarmaxsum/PROL.
Problem

Research questions and friction points this paper is trying to address.

Address catastrophic forgetting in streaming data without rehearsal
Reduce trainable parameters in prompt-based continual learning
Maintain throughput efficiency in online continual learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight prompt generator providing general knowledge
Trainable scaler-and-shifter providing task-specific knowledge
Hard-soft update mechanism preserves PTM generalization
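The scaler-and-shifter and the hard-soft update mechanism listed above can be sketched in a few lines. This is an illustrative sketch only: the class and function names, the per-dimension affine form, and the EMA-style soft rule are assumptions for exposition, not PROL's actual implementation (see the linked repository for that).

```python
import numpy as np

class ScaleShift:
    """Per-dimension affine adapter: y = gamma * x + beta (task-specific knowledge).

    Adapting only gamma and beta keeps the trainable-parameter count linear
    in the feature dimension, in the spirit of the paper's lightweight design.
    """
    def __init__(self, dim: int):
        self.gamma = np.ones(dim)   # learnable scale, identity at init
        self.beta = np.zeros(dim)   # learnable shift, zero at init

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return self.gamma * x + self.beta

def hard_update(params: np.ndarray, new_params: np.ndarray) -> np.ndarray:
    """Hard update: overwrite the parameters directly (e.g., when a new task arrives)."""
    return new_params.copy()

def soft_update(params: np.ndarray, new_params: np.ndarray, tau: float = 0.9) -> np.ndarray:
    """Soft update (assumed EMA form): retain a tau fraction of the old
    parameters, limiting drift away from previously learned tasks."""
    return tau * params + (1.0 - tau) * new_params
```

In this reading, "hard-then-soft" means a direct overwrite when a task boundary demands fast adaptation, followed by conservative EMA-style blending within a task to preserve prior knowledge without storing exemplars.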