🤖 AI Summary
This work addresses the limited understanding of catastrophic forgetting in conventional continual learning, which typically focuses only on performance metrics or final-layer representations without probing underlying mechanisms. The paper proposes the first analytically interpretable framework for continual learning, formally characterizing best- and worst-case forgetting scenarios through the geometric transformations of feature encodings. It reveals a coupled mechanism in which degradation of representational capacity interacts with perturbations of downstream readout functions. The framework's validity is demonstrated via analytically tractable models, a Crosscoder diagnostic tool, and Vision Transformer architectures on sequential CIFAR-10 tasks, revealing that increased model depth significantly exacerbates forgetting.
📝 Abstract
Catastrophic forgetting in continual learning is often measured at the performance or last-layer representation level, overlooking the underlying mechanisms. We introduce a mechanistic framework that offers a geometric interpretation of catastrophic forgetting as the result of transformations to the encoding of individual features. These transformations can lead to forgetting by reducing the allocated capacity of features (worse representation) and disrupting their readout by downstream computations. Analysis of a tractable model formalizes this view, allowing us to identify best- and worst-case scenarios. Through experiments on this model, we empirically test our formal analysis and highlight the detrimental effect of depth. Finally, we demonstrate how our framework can be used in the analysis of practical models through the use of Crosscoders. We present a case study of a Vision Transformer trained on sequential CIFAR-10. Our work provides a new, feature-centric vocabulary for continual learning.
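The two forgetting mechanisms the abstract names, reduced allocated capacity of a feature's encoding and disrupted readout by downstream computations, can be illustrated with a minimal NumPy sketch. This is a toy geometric picture, not the paper's tractable model: a feature is encoded as a direction in activation space and decoded by a frozen linear readout; shrinking the encoding attenuates the recovered signal, while rotating it misaligns encoding and readout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): a feature is a unit direction in a
# d-dimensional activation space, read out by a matched linear probe.
d = 32
feature = rng.normal(size=d)
feature /= np.linalg.norm(feature)
readout = feature.copy()  # readout perfectly aligned with the encoding

def readout_quality(encoding, w):
    """Signal the readout recovers for a unit-strength feature."""
    return float(w @ encoding)

# Before the second task: encoding intact, readout matched.
q_before = readout_quality(feature, readout)

# Mechanism 1, "capacity reduction": training on a new task shrinks the
# feature's encoding (its allocated norm in activation space).
shrunk = 0.4 * feature

# Mechanism 2, "readout disruption": the encoding rotates away from the
# direction the frozen downstream readout expects.
other = rng.normal(size=d)
other -= (other @ feature) * feature  # orthogonalize against the feature
other /= np.linalg.norm(other)
theta = np.deg2rad(60)
rotated = np.cos(theta) * feature + np.sin(theta) * other

q_shrunk = readout_quality(shrunk, readout)
q_rotated = readout_quality(rotated, readout)

print(q_before)   # ~1.0: feature read out perfectly
print(q_shrunk)   # ~0.4: capacity loss attenuates the signal
print(q_rotated)  # ~0.5 (= cos 60°): rotation misaligns encoding and readout
```

In this picture the best case keeps encodings both large and aligned with their readouts, while the worst case combines shrinkage with rotation, which is the coupled mechanism the framework formalizes.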