Class Incremental Continual Learning with Self-Organizing Maps and Variational Autoencoders Using Synthetic Replay

📅 2025-08-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address memory constraints and the inability to store raw data or task labels in class-incremental continual learning, this paper proposes a generative framework integrating Self-Organizing Maps (SOMs) with Variational Autoencoders (VAEs). The method models class distributions in the latent space and maintains only a lightweight memory of per-neuron activation statistics (e.g., running mean and covariance), enabling task-agnostic synthetic replay without access to original samples or explicit task identifiers. Training is inherently visualizable, and the model can be used as a generator after training. Evaluated on CIFAR-10 and CIFAR-100, it outperforms the best prior single-class-incremental methods by 9.8% and 6.7%, respectively, matching the performance of external-memory-based approaches while significantly surpassing memory-free baselines. The framework achieves a favorable trade-off among efficiency, generality, and interpretability.

📝 Abstract
This work introduces a novel generative continual learning framework based on self-organizing maps (SOMs) and variational autoencoders (VAEs) to enable memory-efficient replay, eliminating the need to store raw data samples or task labels. For high-dimensional input spaces, such as those of CIFAR-10 and CIFAR-100, we design a scheme where the SOM operates over the latent space learned by a VAE, whereas, for lower-dimensional inputs, such as those found in MNIST and FashionMNIST, the SOM operates in a standalone fashion. Our method stores a running mean, variance, and covariance for each SOM unit, from which synthetic samples are then generated during future learning iterations. For the VAE-based method, generated samples are fed through the decoder to be used in subsequent replay. Experimental results on standard class-incremental benchmarks show that our approach performs competitively with state-of-the-art memory-based methods and outperforms memory-free methods, notably improving over the best state-of-the-art single-class-incremental performance on CIFAR-10 and CIFAR-100 by nearly 10% and 7%, respectively. Our methodology further facilitates easy visualization of the learning process and can also be utilized as a generative model post-training. Results show our method's capability as a scalable, task-label-free, and memory-efficient solution for continual learning.
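
To make the replay step concrete, here is a minimal sketch of sampling synthetic latents from each SOM unit's stored Gaussian statistics and decoding them for replay. The names `unit_stats`, `sample_replay`, and `decode` are hypothetical stand-ins rather than the paper's actual code; the identity decoder in the toy usage mirrors the standalone-SOM variant used for lower-dimensional inputs.

```python
# Sketch: statistics-based synthetic replay (NumPy).
import numpy as np

rng = np.random.default_rng(0)

def sample_replay(unit_stats, n_per_unit, decode):
    """Draw synthetic latents from each SOM unit's stored Gaussian
    statistics and decode them into replay samples.

    unit_stats : list of (mean, cov) pairs, one per SOM unit,
                 accumulated as running statistics during training.
    decode     : VAE decoder mapping latents to inputs (identity
                 for the standalone-SOM variant on low-dim data).
    """
    batches = []
    for mean, cov in unit_stats:
        # Sample latents from this unit's multivariate Gaussian.
        z = rng.multivariate_normal(mean, cov, size=n_per_unit)
        batches.append(decode(z))
    return np.concatenate(batches, axis=0)

# Toy usage: two units in a 4-D latent space, identity "decoder".
stats = [(np.zeros(4), np.eye(4)), (np.ones(4), 0.5 * np.eye(4))]
replay = sample_replay(stats, n_per_unit=8, decode=lambda z: z)
print(replay.shape)  # (16, 4)
```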
Problem

Research questions and friction points this paper is trying to address.

Develops memory-efficient continual learning without storing raw data
Generates synthetic samples using self-organizing maps and variational autoencoders
Enables class-incremental learning without task labels or data storage
Innovation

Methods, ideas, or system contributions that make the work stand out.

SOM and VAE framework for memory-efficient synthetic replay
Latent space SOM for high-dimensional inputs like CIFAR
Generates samples from stored statistics without raw data (see the running-statistics sketch after this list)
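
The "stored statistics" above can be maintained online. Below is a minimal sketch assuming a Welford-style running mean/covariance per SOM unit and nearest-weight best-matching-unit selection; `UnitStats` and `best_matching_unit` are illustrative names, not the paper's implementation.

```python
# Sketch: online per-unit statistics for a SOM (NumPy).
import numpy as np

class UnitStats:
    """Running mean and covariance for a single SOM unit."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # accumulated outer-product deviations

    def update(self, z):
        # Welford-style online update with a new latent vector z.
        self.n += 1
        delta = z - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, z - self.mean)

    def cov(self):
        # Unbiased covariance; identity fallback until two samples are seen.
        return self.m2 / (self.n - 1) if self.n > 1 else np.eye(len(self.mean))

def best_matching_unit(z, weights):
    # Index of the SOM unit whose weight vector lies closest to z.
    return int(np.argmin(np.linalg.norm(weights - z, axis=1)))

# Toy usage: route latents to their best-matching unit, update its stats.
weights = np.random.default_rng(1).normal(size=(4, 3))  # 4 units, 3-D latents
stats = [UnitStats(3) for _ in range(4)]
for z in np.random.default_rng(2).normal(size=(50, 3)):
    stats[best_matching_unit(z, weights)].update(z)
```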
Pujan Thapa
Rochester Institute of Technology, Rochester, NY, USA
Alexander Ororbia
Rochester Institute of Technology, Rochester, NY, USA
Travis Desell
Associate Professor, Rochester Institute of Technology
Neuroevolution · Evolutionary Algorithms · Data Science · Scientific Computing · High Performance Computing