🤖 AI Summary
To address catastrophic forgetting in continual learning for shallow models such as logistic regression, where memory budgets are tightly constrained, this paper proposes a compact-memory construction framework based on Hessian matching. The method uses probabilistic PCA to efficiently estimate parameter-space curvature and a Hessian-guided sample-selection strategy to compress critical past knowledge into a small, high-fidelity memory, which is then used with Experience Replay. On Split-ImageNet, it achieves 60% accuracy using a memory of only 0.3% of the original dataset, substantially outperforming conventional replay (30%); with a 2% memory budget, accuracy reaches 74%, approaching the batch-learning upper bound of 77.6%. This work establishes a scalable, theory-driven approach to memory compression in resource-constrained continual learning.
📝 Abstract
Despite recent progress, continual learning still does not match the performance of batch training. To avoid catastrophic forgetting, we need to build a compact memory of essential past knowledge, but no clear solution has yet emerged, even for shallow neural networks with just one or two layers. In this paper, we present a new method to build compact memory for logistic regression. Our method builds on a result by Khan and Swaroop [2021], who show the existence of optimal memory for such models. We formulate the search for this optimal memory as Hessian matching and propose a probabilistic PCA method to estimate it. Our approach can drastically improve accuracy compared to Experience Replay. For instance, on Split-ImageNet, we get 60% accuracy compared to the 30% obtained by replay, with a memory size equivalent to 0.3% of the data size. Increasing the memory size to 2% further boosts the accuracy to 74%, closing the gap to the batch accuracy of 77.6% on this task. Our work opens a new direction for building compact memory that can also be useful in the future for continual deep learning.
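To make the Hessian-matching idea concrete: for logistic regression, the Hessian of the negative log-likelihood is a weighted sum of per-example outer products, H = Σᵢ λᵢ xᵢxᵢᵀ with λᵢ = pᵢ(1 − pᵢ), so a compact memory can be framed as a small set of examples whose summed curvature approximates the full H. The sketch below is an illustrative simplification, not the paper's algorithm: it scores examples by their curvature mass in the top eigen-subspace of H (a plain eigendecomposition stands in for the probabilistic PCA estimate), and all function names are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_hessian(X, w):
    """Hessian of the logistic-regression NLL at weights w:
    H = X^T diag(p * (1 - p)) X, where p = sigmoid(X @ w)."""
    lam = sigmoid(X @ w) * (1.0 - sigmoid(X @ w))   # per-example curvature
    return (X * lam[:, None]).T @ X

def select_memory(X, w, m):
    """Illustrative Hessian-matching selection: keep the m examples
    carrying the most curvature in the top eigen-subspace of H.
    (A stand-in for the paper's probabilistic-PCA-based search.)"""
    H = logistic_hessian(X, w)
    _, eigvec = np.linalg.eigh(H)          # eigenvalues ascending
    k = min(m, X.shape[1])
    top = eigvec[:, -k:]                   # highest-curvature directions
    lam = sigmoid(X @ w) * (1.0 - sigmoid(X @ w))
    scores = lam * np.sum((X @ top) ** 2, axis=1)
    return np.argsort(scores)[-m:]         # indices of memory examples
```

A memory built this way would be replayed alongside new-task data, so that training on the union approximately preserves the curvature (and hence the posterior shape) of past tasks.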