Dream2Learn: Structured Generative Dreaming for Continual Learning

πŸ“… 2026-03-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses catastrophic forgetting in continual learning by proposing a "learning-in-dreams" mechanism inspired by human dreaming. Rather than replaying real samples, the method autonomously generates semantically novel yet knowledge-consistent structured synthetic data through a frozen diffusion model guided by classifier-driven soft prompt optimization. This approach actively reorganizes and reconstructs the representational space to balance model plasticity and stability, enabling forward-looking self-training and positive forward transfer. Experiments on Mini-ImageNet, FG-ImageNet, and ImageNet-R demonstrate that the proposed method significantly outperforms strong baselines, substantially enhancing the model's adaptability across sequential tasks.

πŸ“ Abstract
Continual learning requires balancing plasticity and stability while mitigating catastrophic forgetting. Inspired by human dreaming as a mechanism for internal simulation and knowledge restructuring, we introduce Dream2Learn (D2L), a framework in which a model autonomously generates structured synthetic experiences from its own internal representations and uses them for self-improvement. Rather than reconstructing past data as in generative replay, D2L enables a classifier to create novel, semantically distinct dreamed classes that are coherent with its learned knowledge yet do not correspond to previously observed data. These dreamed samples are produced by conditioning a frozen diffusion model through soft prompt optimization driven by the classifier itself. The generated data are not used to replace memory, but to expand and reorganize the representation space, effectively allowing the network to self-train on internally synthesized concepts. By integrating dreamed classes into continual training, D2L proactively structures latent features to support forward knowledge transfer and adaptation to future tasks. This prospective self-training mechanism mirrors the role of sleep in consolidating and reorganizing memory, turning internal simulations into a tool for improved generalization. Experiments on Mini-ImageNet, FG-ImageNet, and ImageNet-R demonstrate that D2L consistently outperforms strong rehearsal-based baselines and achieves positive forward transfer, confirming its ability to enhance adaptability through internally generated training signals.
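The core mechanism in the abstract — optimizing a soft prompt so that a frozen generator produces samples the frozen classifier assigns to a novel "dreamed" class — can be sketched with a toy stand-in. This is purely illustrative, not the paper's implementation: the linear map `G` replaces the frozen diffusion model, and the dimensions, learning rate, and plain gradient loop are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in "generator" (the paper uses a frozen diffusion model;
# a linear map is an illustrative assumption) and a frozen classifier.
G = rng.normal(scale=0.1, size=(16, 8))   # soft prompt (8-d) -> features (16-d)
W = rng.normal(scale=0.1, size=(4, 16))   # features -> logits over 4 classes

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def dream_prompt(target_class, steps=1000, lr=1.0):
    """Optimize a soft prompt so the frozen generator's output is
    classified as `target_class`; G and W stay frozen throughout."""
    p = 0.01 * rng.normal(size=8)         # the only learnable parameters
    for _ in range(steps):
        feats = G @ p                     # "generate" from the prompt
        probs = softmax(W @ feats)
        # cross-entropy gradient w.r.t. logits: probs - one_hot(target)
        g_logits = probs.copy()
        g_logits[target_class] -= 1.0
        p -= lr * (G.T @ (W.T @ g_logits))  # chain rule back to the prompt
    return p

# "Dream" a sample for class 2 and check the frozen classifier agrees.
p = dream_prompt(target_class=2)
probs = softmax(W @ (G @ p))
print(probs.argmax(), round(float(probs[2]), 3))
```

The key design point this sketch mirrors is that gradients flow only into the prompt: generator and classifier weights are never updated, so dreamed samples stay consistent with the model's existing knowledge.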
Problem

Research questions and friction points this paper is trying to address.

continual learning
catastrophic forgetting
forward transfer
plasticity-stability balance
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured generative dreaming
continual learning
forward transfer
diffusion model
self-training
πŸ”Ž Similar Papers
No similar papers found.
Salvatore Calcagno
PeRCeiVe Lab, University of Catania, Italy
Matteo Pennisi
PeRCeiVe Lab, University of Catania, Italy
Federica Proietto Salanitri
PeRCeiVe Lab, University of Catania, Italy
Amelia Sorrenti
PeRCeiVe Lab, University of Catania, Italy
Simone Palazzo
University of Catania
Concetto Spampinato
University of Catania
Deep Learning · Artificial Intelligence · Computer Vision · Medical Image Analysis
Giovanni Bellitto
PeRCeiVe Lab, University of Catania, Italy