Continual Learning with Synthetic Boundary Experience Blending

πŸ“… 2025-07-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
A primary cause of catastrophic forgetting in continual learning is the sparsity of stored samples in experience replay, which degrades decision boundaries. To address this, we propose Experience Blendingβ€”a novel framework that introduces synthetically generated boundary-proximal data as an implicit regularizer. Specifically, high-quality synthetic samples are batch-generated in a low-dimensional feature space using a multivariate differential privacy noise mechanism, then jointly trained end-to-end with real replayed samples. This approach effectively mitigates boundary simplification and enhances model stability. Empirical evaluation on CIFAR-10, CIFAR-100, and Tiny ImageNet demonstrates absolute accuracy improvements of 10%, 6%, and 13%, respectively, significantly outperforming nine state-of-the-art continual learning methods. Our results validate the critical role of synthetic boundary-aware data in experience replay for mitigating forgetting.

πŸ“ Abstract
Continual learning (CL) aims to address catastrophic forgetting in models trained sequentially on multiple tasks. While experience replay has shown promise, its effectiveness is often limited by the sparse distribution of stored key samples, leading to overly simplified decision boundaries. We hypothesize that introducing synthetic data near the decision boundary (Synthetic Boundary Data, or SBD) during training serves as an implicit regularizer, improving boundary stability and mitigating forgetting. To validate this hypothesis, we propose a novel training framework, **Experience Blending**, which integrates knowledge from both stored key samples and synthetic, boundary-adjacent data. Experience Blending consists of two core components: (1) a multivariate Differential Privacy (DP) noise mechanism that injects batch-wise noise into low-dimensional feature representations, generating SBD; and (2) an end-to-end training strategy that jointly leverages both stored key samples and SBD. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny ImageNet demonstrate that our method outperforms nine CL baselines, achieving accuracy improvements of 10%, 6%, and 13%, respectively.
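The abstract's first component, generating SBD by injecting multivariate DP noise into low-dimensional features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the noise is calibrated like a standard Gaussian DP mechanism (sigma = sensitivity · sqrt(2 ln(1.25/δ)) / ε), and the function name and parameters are hypothetical.

```python
import numpy as np

def generate_sbd(features, sensitivity=1.0, epsilon=1.0, delta=1e-5, seed=None):
    """Perturb stored low-dimensional feature vectors with multivariate
    Gaussian noise to produce synthetic boundary-proximal samples (SBD).

    The noise scale follows the classic Gaussian-mechanism calibration;
    the actual mechanism in the paper may differ.
    """
    rng = np.random.default_rng(seed)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    # Batch-wise i.i.d. Gaussian noise over every feature dimension.
    noise = rng.normal(0.0, sigma, size=features.shape)
    return features + noise
```

Smaller ε yields larger sigma, pushing synthetic points further from the stored samples and, plausibly, closer to (or across) the decision boundary.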
Problem

Research questions and friction points this paper is trying to address.

Mitigate catastrophic forgetting in continual learning models
Improve decision boundary stability with synthetic data
Enhance experience replay using boundary-adjacent samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic Boundary Data for boundary stability
Multivariate DP noise for feature representation
Experience Blending with stored and synthetic data
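The third point, joint training on stored and synthetic data, can be illustrated with a single blended gradient step. This is a hedged sketch using a plain linear softmax classifier on feature vectors; the function name, optimizer, and model are illustrative assumptions, not the paper's end-to-end architecture.

```python
import numpy as np

def blended_step(W, X_real, y_real, X_sbd, y_sbd, lr=0.1):
    """One gradient step of a linear softmax classifier on the
    concatenation of replayed real features and synthetic boundary data,
    so both sources shape the decision boundary in the same update."""
    X = np.vstack([X_real, X_sbd])
    y = np.concatenate([y_real, y_sbd])
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # Cross-entropy gradient: softmax output minus one-hot targets.
    probs[np.arange(len(y)), y] -= 1.0
    grad = X.T @ probs / len(y)
    return W - lr * grad
```

In this toy form the synthetic points simply enter the loss alongside the replayed ones, acting as the implicit regularizer the summary describes.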
πŸ”Ž Similar Papers
No similar papers found.