🤖 AI Summary
To mitigate catastrophic forgetting in online continual learning, this paper proposes the holistic proxy-based contrastive replay (HPCR) framework. HPCR introduces a novel three-component mechanism: (1) proxy-based contrastive learning with conditionally sampled anchor-to-sample pairs to enhance feature discriminability; (2) gradient-aware decoupling of the temperature coefficient into two parts, each set to a different value, to adapt generalization behavior; and (3) knowledge distillation via additional loss terms to strengthen retention of prior task knowledge. Evaluated on four standard online continual learning benchmarks, HPCR consistently outperforms state-of-the-art methods, achieving average accuracy gains of 2.1–4.7 percentage points. The framework significantly alleviates forgetting while improving model stability and adaptability under streaming-data conditions.
📝 Abstract
Online continual learning, which aims to train a neural network that continuously learns new data from a single pass over an online data stream, generally suffers from catastrophic forgetting. Existing replay-based methods alleviate forgetting by replaying a portion of old data in either a proxy-based or a contrastive-based manner, each with its own shortcomings. Our previous work proposed a novel replay-based method called proxy-based contrastive replay (PCR), which addresses these shortcomings by combining the complementary advantages of both replay manners. In this work, we further conduct gradient and limitation analyses of PCR. The results show that PCR can still be improved in the feature extraction, generalization, and anti-forgetting capabilities of the model. Hence, we develop a more advanced method named holistic proxy-based contrastive replay (HPCR). HPCR consists of three components, each tackling one of the limitations of PCR. The first is a contrastive component that conditionally incorporates anchor-to-sample pairs into PCR, improving the feature extraction ability. The second is a temperature component that decouples the temperature coefficient into two parts based on their gradient impacts and sets different values for them, enhancing the generalization ability. The third is a distillation component that constrains the learning process with additional loss terms, improving the anti-forgetting ability. Experiments on four datasets consistently demonstrate the superiority of HPCR over various state-of-the-art methods.
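To make the loss structure concrete, the following is a minimal sketch of a proxy-based contrastive loss that conditionally adds anchor-to-sample pairs and uses two decoupled temperatures. The pairing rules, function names, and temperature values are illustrative assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def l2norm(x):
    """Row-wise L2 normalization."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def proxy_contrastive_loss(feats, labels, proxies,
                           sample_feats=None, sample_labels=None,
                           tau_proxy=0.09, tau_sample=0.07):
    """Toy proxy-based contrastive loss with decoupled temperatures.

    Each anchor's positive is its class proxy (scaled by tau_proxy);
    negatives are the other class proxies (tau_proxy) plus, optionally,
    other-class samples (tau_sample) as anchor-to-sample pairs.
    Hypothetical sketch; the paper's exact pairing and scheduling differ.
    """
    feats = l2norm(np.asarray(feats, dtype=float))
    proxies = l2norm(np.asarray(proxies, dtype=float))
    losses = []
    for i, y in enumerate(labels):
        logits = [feats[i] @ proxies[y] / tau_proxy]           # positive: own class proxy
        logits += [feats[i] @ proxies[c] / tau_proxy
                   for c in range(len(proxies)) if c != y]     # proxy negatives
        if sample_feats is not None:                           # conditional anchor-to-sample pairs
            sf = l2norm(np.asarray(sample_feats, dtype=float))
            logits += [feats[i] @ sf[j] / tau_sample
                       for j in range(len(sf)) if sample_labels[j] != y]
        logits = np.array(logits)
        logits -= logits.max()                                 # numerical stability
        losses.append(np.log(np.exp(logits).sum()) - logits[0])
    return float(np.mean(losses))
```

For example, with two unit proxies `[[1, 0], [0, 1]]`, an anchor aligned with its own class proxy yields a much smaller loss than one aligned with the other proxy, which is the gradient behavior the decoupled temperatures are meant to shape.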