A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To mitigate catastrophic forgetting in continual learning, this paper proposes a brain-inspired model, a coupled Variational Autoencoder (VAE) and Modern Hopfield Network (MHN), motivated by the brain's complementary learning systems. The architecture achieves a functional dissociation within a scalable framework: pattern completion is carried chiefly by the VAE, and pattern separation chiefly by the MHN. The VAE provides representational generalization and faithful reconstruction of prior knowledge, while the MHN provides robust memory storage and discriminative separation of new and old tasks. Evaluated on Split-MNIST, the model reaches ~90% average accuracy, substantially outperforming conventional continual learning approaches, and exhibits markedly reduced forgetting. Representation analysis confirms spatial and behavioral segregation of the two functions, validating the dual-role design. The work establishes a paradigm for memory consolidation in artificial systems that balances biological plausibility with engineering feasibility.
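The MHN's role in memory storage and retrieval can be illustrated with the standard modern Hopfield update rule (softmax attention over stored patterns). This is a minimal sketch of that general mechanism, not the paper's implementation; all pattern values and parameters below are illustrative.

```python
import numpy as np

def retrieve(stored, cue, beta=4.0, steps=3):
    # Modern Hopfield update: xi <- stored.T @ softmax(beta * stored @ xi).
    # A high beta sharpens the softmax, pulling the state toward one stored pattern.
    xi = cue.astype(float)
    for _ in range(steps):
        logits = beta * stored @ xi
        attn = np.exp(logits - logits.max())  # stable softmax
        attn /= attn.sum()
        xi = stored.T @ attn
    return xi

# Four toy binary patterns (one per row); values are illustrative.
stored = np.array([
    [ 1,  1,  1,  1, -1, -1, -1, -1],
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [-1, -1,  1,  1, -1, -1,  1,  1],
    [ 1,  1, -1, -1, -1, -1,  1,  1],
], dtype=float)

cue = stored[0].copy()
cue[:3] = 0.0                # partial cue: first three entries unknown
out = retrieve(stored, cue)  # pattern completion recovers the first pattern
```

Because the cue still agrees with the first pattern on its remaining entries, the softmax concentrates almost all attention on that row and the state converges to the complete stored memory.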

📝 Abstract
Learning new information without forgetting prior knowledge is central to human intelligence. In contrast, neural network models suffer from catastrophic forgetting: a significant degradation in performance on previously learned tasks when acquiring new information. The Complementary Learning Systems (CLS) theory offers an explanation for this human ability, proposing that the brain has distinct systems for pattern separation (encoding distinct memories) and pattern completion (retrieving complete memories from partial cues). To capture these complementary functions, we leverage the representational generalization capabilities of variational autoencoders (VAEs) and the robust memory storage properties of Modern Hopfield networks (MHNs), combining them into a neurally plausible continual learning model. We evaluate this model on the Split-MNIST task, a popular continual learning benchmark, and achieve close to state-of-the-art accuracy (~90%), substantially reducing forgetting. Representational analyses empirically confirm the functional dissociation: the VAE underwrites pattern completion, while the MHN drives pattern separation. By capturing pattern separation and completion in scalable architectures, our work provides a functional template for modeling memory consolidation, generalization, and continual learning in both biological and artificial systems.
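The Split-MNIST benchmark mentioned in the abstract follows a standard protocol: the ten digit classes are partitioned into five sequential binary tasks. A minimal sketch of that task construction (the helper name and the toy label array are illustrative, not from the paper):

```python
import numpy as np

def split_tasks(labels, n_tasks=5):
    """Partition classes 0..9 into consecutive pairs, one pair per task
    (the standard Split-MNIST protocol), and return per-task sample indices."""
    classes = np.arange(10).reshape(n_tasks, -1)  # [[0,1], [2,3], ..., [8,9]]
    return [np.where(np.isin(labels, pair))[0] for pair in classes]

# Toy stand-in for the MNIST training labels.
labels = np.array([0, 3, 1, 8, 2, 9, 5, 4, 7, 6])
tasks = split_tasks(labels)
# tasks[0] holds the indices of samples labeled 0 or 1,
# tasks[4] those labeled 8 or 9; the model sees the tasks in sequence.
```

Catastrophic forgetting is then measured as the drop in accuracy on earlier task pairs after training on later ones.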
Problem

Research questions and friction points this paper is trying to address.

Prevent catastrophic forgetting in neural networks during continual learning
Model pattern separation and completion inspired by brain systems
Combine VAEs and Hopfield networks for memory consolidation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines variational autoencoders with Hopfield networks
Models pattern separation and completion functions
Achieves high accuracy in continual learning tasks
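The VAE half of the combination listed above trains on the standard evidence lower bound. A minimal sketch of that per-example objective, with squared-error reconstruction and the closed-form KL divergence between a diagonal Gaussian posterior and a standard normal prior (function and variable names are illustrative):

```python
import numpy as np

def vae_loss(x, x_recon, mu, logvar):
    """Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I)).
    mu and logvar parameterize the diagonal Gaussian posterior q(z|x)."""
    recon = np.sum((x - x_recon) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

x = np.array([1.0, 2.0])
loss = vae_loss(x, x, mu=np.zeros(2), logvar=np.zeros(2))  # 0.0 at the prior
```

Minimizing the reconstruction term gives the faithful retrieval of prior knowledge, while the KL term regularizes the latent space toward the smooth, generalizable representations the summary attributes to the VAE.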