Degradation of Feature Space in Continual Learning

πŸ“… 2026-02-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study investigates whether enforcing isotropy in the feature space mitigates catastrophic forgetting and improves representation quality in continual learning. By comparing the geometric structure of feature spaces under centralized versus continual learning settings, the work reveals fundamental differences between the two and challenges the applicability of isotropy as a universal inductive bias. Experimental results on CIFAR-10 and CIFAR-100 demonstrate that isotropic regularization not only fails to enhance performance but actually degrades accuracy, suggesting that isotropy is ill-suited as an inductive bias in non-stationary learning scenarios. These findings offer a new perspective on the geometry of representations in continual learning and question the uncritical adoption of isotropy-promoting objectives from static learning contexts.
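The summary compares the geometric structure of feature spaces under centralized versus continual learning. One common way to quantify how isotropic (versus collapsed along a few directions) a feature space is, is the effective rank of the feature covariance; the sketch below uses this standard proxy, though the paper's exact diagnostic is not stated here.

```python
import numpy as np

def effective_rank(features: np.ndarray) -> float:
    """Effective rank of the feature covariance: exp of the entropy of
    the normalized eigenvalue spectrum. It approaches the feature
    dimension for isotropic features and drops toward 1 as the space
    becomes anisotropic (dominated by a few directions)."""
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / len(features)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    p = eigvals / eigvals.sum()
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
iso = rng.normal(size=(10_000, 16))                  # isotropic Gaussian
aniso = iso * np.array([10.0] + [0.1] * 15)          # one dominant axis
print(effective_rank(iso))    # close to 16
print(effective_rank(aniso))  # close to 1
```

An increasingly anisotropic feature space, as the abstract describes for continual learning, would show up as a shrinking effective rank across tasks.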

πŸ“ Abstract
Centralized training is the standard paradigm in deep learning, enabling models to learn from a unified dataset in a single location. In such a setup, isotropic feature distributions naturally arise as a means to support well-structured and generalizable representations. In contrast, continual learning operates on streaming and non-stationary data and trains models incrementally, inherently facing the well-known plasticity-stability dilemma. In such settings, the learning dynamics tend to yield an increasingly anisotropic feature space. This raises a fundamental question: should isotropy be enforced to achieve a better balance between stability and plasticity, and thereby mitigate catastrophic forgetting? In this paper, we investigate whether promoting feature-space isotropy can enhance representation quality in continual learning. Through experiments using contrastive continual learning techniques on the CIFAR-10 and CIFAR-100 datasets, we find that isotropic regularization fails to improve, and can in fact degrade, model accuracy in continual settings. Our results highlight essential differences in feature geometry between centralized and continual learning, suggesting that isotropy, while beneficial in centralized setups, may not constitute an appropriate inductive bias for non-stationary learning scenarios.
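The abstract refers to "isotropic regularization" added to contrastive continual learning objectives. The paper's exact regularizer is not given here; a minimal illustrative sketch is a penalty on the distance between the batch feature covariance and a scaled identity, which pushes the eigenvalue spectrum toward uniformity (all names below are illustrative, not the authors' implementation).

```python
import numpy as np

def isotropy_penalty(z: np.ndarray) -> float:
    """Squared Frobenius distance between the batch feature covariance
    and a scaled identity matrix, normalized by the feature dimension.
    Zero iff the covariance is exactly isotropic. An illustrative
    isotropy-promoting regularizer, not the paper's exact objective."""
    z = z - z.mean(axis=0, keepdims=True)
    cov = z.T @ z / (len(z) - 1)
    d = z.shape[1]
    target = np.eye(d) * cov.diagonal().mean()
    return float(((cov - target) ** 2).sum() / d)

rng = np.random.default_rng(1)
z_iso = rng.normal(size=(4096, 8))                       # near-isotropic batch
z_aniso = z_iso * np.array([5.0] + [1.0] * 7)            # stretched batch
print(isotropy_penalty(z_iso))    # near zero
print(isotropy_penalty(z_aniso))  # much larger
```

In training, such a term would be added to the contrastive loss with a small weight; the paper's finding is that doing so fails to improve, and can degrade, accuracy in continual settings.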
Problem

Research questions and friction points this paper is trying to address.

continual learning
feature space isotropy
catastrophic forgetting
plasticity-stability dilemma
non-stationary data
Innovation

Methods, ideas, or system contributions that make the work stand out.

continual learning
feature isotropy
catastrophic forgetting
representation geometry
contrastive learning
πŸ”Ž Similar Papers
No similar papers found.
Chiara Lanza
CTTC, Sustainable Artificial Intelligence RU, Castelldefels, Spain
Roberto Pereira
CTTC, Sustainable Artificial Intelligence RU, Castelldefels, Spain
Marco Miozzo
Senior Researcher @ Centre Tecnològic de Telecomunicacions de Catalunya (CTTC)
Machine Learning, Mobile Networks, Edge Intelligence, Transparent and Explainable AI, Energy Ethical
Eduard Angelats
CTTC, Geomatics RU, Castelldefels, Spain
Paolo Dini
CTTC/CERCA | Sustainable AI research unit
information engineering, machine learning, multi-agent systems, sustainable computing