Randomized Neural Network with Adaptive Forward Regularization for Online Task-free Class Incremental Learning

📅 2025-10-24
🤖 AI Summary
To address distribution shift and catastrophic forgetting induced by non-i.i.d. data streams in online task-free class-incremental learning (OTCIL), this paper proposes edRVFL-kF, a framework built upon the Random Vector Functional Link (RVFL) network that integrates recursive convex optimization with tunable forward regularization to enable one-pass closed-form updates and adaptive learning rates. It further introduces edRVFL-kF-Bayes, which incorporates a Bayesian adaptive mechanism to estimate the regularization strength automatically, eliminating manual hyperparameter tuning. The method operates without data replay, leveraging unsupervised knowledge to mitigate forgetting. Evaluated on two image datasets across six metrics, it consistently outperforms baselines with stable dynamic performance; ablation and compatibility studies confirm its effectiveness and robustness under long task sequences.
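The RVFL backbone referenced in the summary trains only its output layer: random input-to-hidden weights stay fixed, direct links concatenate raw inputs with hidden activations, and the readout is a single closed-form ridge solve. A minimal NumPy sketch of that generic RVFL recipe (the toy data, sizes, and names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_features(X, W, b):
    # RVFL enhancement: fixed random hidden layer plus direct links to the input.
    H = np.tanh(X @ W + b)
    return np.hstack([X, H])

# Toy regression stream stand-in: 200 samples, 5 input features.
X = rng.normal(size=(200, 5))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(200, 1))

# Random, never-trained hidden weights (5 inputs -> 50 hidden units).
W = rng.normal(size=(5, 50))
b = rng.normal(size=(1, 50))

D = rvfl_features(X, W, b)          # (200, 55): direct links + hidden units
lam = 1e-2                          # ridge regularization strength
# Closed-form ridge readout: beta = (D^T D + lam*I)^{-1} D^T Y
beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ Y)

pred = D @ beta
```

Because only `beta` is learned, updates stay convex and can be made incremental, which is what the -kF variants exploit.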

📝 Abstract
Class incremental learning (CIL) requires an agent to learn distinct tasks consecutively with knowledge retention against forgetting. Problems impeding the practical applications of CIL methods are twofold: (1) non-i.i.d. batch streams and no boundary prompts to update, known as the harsher online task-free CIL (OTCIL) scenario; (2) CIL methods suffer from memory loss in learning long task streams, as shown in Fig. 1 (a). To achieve efficient decision-making and decrease cumulative regrets during the OTCIL process, a randomized neural network (Randomized NN) with forward regularization (-F) is proposed to resist forgetting and enhance learning performance. This general framework integrates unsupervised knowledge into recursive convex optimization, has no learning dissipation, and can outperform the canonical ridge style (-R) in OTCIL. Based on this framework, we derive the algorithm of the ensemble deep random vector functional link network (edRVFL) with adjustable forward regularization (-kF), where k mediates the intensity of the intervention. edRVFL-kF generates one-pass closed-form incremental updates and variable learning rates, effectively avoiding past replay and catastrophic forgetting while achieving superior performance. Moreover, to curb unstable penalties caused by non-i.i.d. streams and mitigate the intractable tuning of -kF in OTCIL, we improve it to the plug-and-play edRVFL-kF-Bayes, enabling all hard k values in multiple sub-learners to be self-adaptively determined through Bayesian learning. Experiments were conducted on 2 image datasets covering 6 metrics, dynamic performance, ablation tests, and compatibility, which distinctly validate the efficacy of our OTCIL frameworks with -kF-Bayes and -kF styles.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in online task-free incremental learning
Mitigates memory loss during long non-i.i.d task streams
Reduces cumulative regrets through adaptive forward regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomized neural network with adaptive forward regularization
Ensemble deep random vector functional link network
Plug-and-play Bayesian learning for self-adaptive parameters