IDER: IDempotent Experience Replay for Reliable Continual Learning

📅 2026-02-28
📈 Citations: 0
✨ Influential: 0
๐Ÿ“„ PDF
🤖 AI Summary
This work addresses catastrophic forgetting and poor predictive reliability in continual learning by introducing, for the first time, the principle of idempotence. The authors propose a lightweight, plug-and-play experience replay framework that enforces idempotent behavior during model updates through two components: idempotent experience replay and an idempotent distillation loss. The first keeps the updated model consistent on the current data stream; the second feeds the current model's outputs back into the previous checkpoint and minimizes the resulting output discrepancy. The method integrates seamlessly with mainstream replay strategies and simultaneously improves accuracy, robustness against forgetting, and uncertainty calibration, all without significant computational overhead. Extensive experiments across multiple benchmarks validate idempotence as a fundamental principle for building efficient and trustworthy continual learning systems.

📝 Abstract
Catastrophic forgetting, the tendency of neural networks to forget previously learned knowledge when learning new tasks, has been a major challenge in continual learning (CL). To tackle this challenge, numerous CL methods have been proposed and shown to reduce forgetting. Furthermore, CL models deployed in mission-critical settings can benefit from uncertainty awareness by calibrating their predictions to reliably assess their confidence. However, existing uncertainty-aware continual learning methods suffer from high computational overhead and incompatibility with mainstream replay methods. To address this, we propose idempotent experience replay (IDER), a novel approach based on the idempotent property, whereby repeated applications of a function yield the same output. Specifically, we first adapt the training loss to make the model idempotent on current data streams. In addition, we introduce an idempotence distillation loss: we feed the output of the current model back into the old checkpoint and then minimize the distance between this reprocessed output and the original output of the current model. This yields a simple and effective new baseline for building reliable continual learners, which can be seamlessly integrated with other CL approaches. Extensive experiments on different CL benchmarks demonstrate that IDER consistently improves prediction reliability while simultaneously boosting accuracy and reducing forgetting. Our results suggest the potential of idempotence as a promising principle for deploying efficient and trustworthy continual learning systems in real-world applications. Our code is available at https://github.com/YutingLi0606/Idempotent-Continual-Learning.
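The two losses described in the abstract can be sketched as follows. This is an illustrative NumPy sketch based solely on the abstract's description, not the authors' released implementation (see their GitHub link above); the function names are our own, and models are abstracted as plain functions.

```python
import numpy as np

def idempotence_loss(f, x):
    """Self-idempotence on the current data stream: applying the model
    twice should match applying it once, i.e. f(f(x)) ~= f(x)."""
    y = f(x)
    return float(np.mean((f(y) - y) ** 2))

def idempotent_distillation_loss(f_cur, f_old, x):
    """Idempotence distillation: feed the current model's output back
    through the frozen old checkpoint and penalize the discrepancy
    with the current model's original output."""
    y = f_cur(x)
    return float(np.mean((f_old(y) - y) ** 2))

# Toy check: ReLU is exactly idempotent (relu(relu(v)) == relu(v)),
# so both losses vanish for it.
proj = lambda v: np.where(v > 0, v, 0.0)
x = np.array([-1.0, 0.5, 2.0])
print(idempotence_loss(proj, x))                  # 0.0
print(idempotent_distillation_loss(proj, proj, x))  # 0.0
```

In practice both terms would be added to the usual replay objective and minimized by gradient descent on the current model, with the old checkpoint kept frozen.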
Problem

Research questions and friction points this paper is trying to address.

catastrophic forgetting
continual learning
uncertainty awareness
experience replay
reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

idempotent learning
experience replay
continual learning
uncertainty calibration
catastrophic forgetting
Zhanwang Liu
School of Computer Science, Shanghai Jiao Tong University
Yuting Li
School of Computer Science, Shanghai Jiao Tong University
Haoyuan Gao
School of Computer Science, Shanghai Jiao Tong University
Yexin Li
State Key Laboratory of General Artificial Intelligence BIGAI
reinforcement learning, multi-agent systems, multi-armed bandits, data mining
Linghe Kong
Shanghai Jiao Tong University
Internet of Things, mobile computing, big data
Lichao Sun
Lehigh University
Weiran Huang
School of Computer Science, Shanghai Jiao Tong University; Shanghai Innovation Institute