Abstraction as a Memory-Efficient Inductive Bias for Continual Learning

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses catastrophic forgetting in online continual learning, a challenge exacerbated by interference from new knowledge and by the high memory overhead of the replay buffers common in existing methods. To overcome these limitations, the authors propose Abstraction-Augmented Training (AAT), which introduces structural abstraction as an inductive bias that requires no additional memory and entirely eliminates the need for replay mechanisms. AAT jointly optimizes concrete instances and their abstract representations, such as entity masks or shared proverbs, within the loss function, encouraging the model to capture latent relational structure shared across samples. Experiments demonstrate that, without any extra memory consumption, AAT matches or even surpasses strong experience replay baselines on both relational and narrative datasets, validating abstraction as an effective strategy for efficient continual learning.

📝 Abstract
The real world is non-stationary and infinitely complex, requiring intelligent agents to learn continually without the prohibitive cost of retraining from scratch. While online continual learning offers a framework for this setting, learning new information often interferes with previously acquired knowledge, causing forgetting and degraded generalization. To address this, we propose Abstraction-Augmented Training (AAT), a loss-level modification that encourages models to capture the latent relational structure shared across examples. By jointly optimizing over concrete instances and their abstract representations, AAT introduces a memory-efficient inductive bias that stabilizes learning in strictly online data streams, eliminating the need for a replay buffer. To capture the multi-faceted nature of abstraction, we introduce and evaluate AAT on two benchmarks: a controlled relational dataset where abstraction is realized through entity masking, and a narrative dataset where abstraction is expressed through shared proverbs. Our results show that AAT achieves performance comparable to or exceeding strong experience replay (ER) baselines, despite requiring zero additional memory and only minimal changes to the training objective. This work highlights structural abstraction as a powerful, memory-free alternative to ER.
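The loss-level modification described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the `[ENT]` mask token, and the fixed weighting term are all assumptions introduced here for clarity.

```python
def mask_entities(tokens, entities, mask_token="[ENT]"):
    """Toy abstraction step (entity masking): replace concrete entities
    with a shared placeholder so only relational structure remains."""
    return [mask_token if t in entities else t for t in tokens]

def aat_loss(concrete_loss, abstract_loss, weight=0.5):
    """Joint objective over a concrete instance and its abstract view.
    The abstract view is derived from the current sample itself, so
    unlike a replay buffer this adds no memory cost."""
    return concrete_loss + weight * abstract_loss

# Example: the abstract view of a sentence keeps the relation, not the names.
tokens = ["Alice", "handed", "Bob", "the", "map"]
print(mask_entities(tokens, {"Alice", "Bob"}))
# → ['[ENT]', 'handed', '[ENT]', 'the', 'map']
```

In a training loop, `concrete_loss` and `abstract_loss` would each be the model's loss on the original and masked inputs respectively; the single extra forward pass is the "minimal change to the training objective" the abstract refers to.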
Problem

Research questions and friction points this paper is trying to address.

continual learning
catastrophic forgetting
online learning
memory efficiency
inductive bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

abstraction
continual learning
inductive bias
memory-efficient
online learning
Elnaz Rahmati, University of Southern California
Nona Ghazizadeh, University of Southern California
Zhivar Sourati, Graduate Research Assistant, University of Southern California
(Natural Language Processing, Cognitive Psychology, Reasoning, Social Network Analysis)
Nina Rouhani, University of Southern California
Morteza Dehghani, University of Southern California