HiCL: Hippocampal-Inspired Continual Learning

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To mitigate catastrophic forgetting in continual learning, this paper proposes a hippocampus-inspired dual-memory architecture. It employs grid-cell-like encoding to generate spatially structured sparse representations and leverages a dentate gyrus (DG)-inspired Top-k gating mechanism to dynamically route inputs to a mixture-of-experts module—eliminating the need for auxiliary gating networks. Complementing this, a CA3-like autoassociative memory stores experience trajectories, while task-similarity-aware elastic weight consolidation and prioritized experience replay enhance retention. Parameter importance is updated online via exponential moving averages, and efficient task matching is achieved using cosine similarity. Evaluated on standard benchmarks, the method substantially reduces cross-task interference, achieves near-state-of-the-art performance with lower computational overhead, and demonstrates strong scalability and neurobiological plausibility.
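The DG-style top-k gating and prototype-based routing described above can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the function names, the epsilon for numerical safety, and the momentum value are assumptions.

```python
import numpy as np

def topk_sparse(x, k):
    """DG-style pattern separation: keep the k largest activations, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]  # indices of the k largest values
    out[idx] = x[idx]
    return out

def route_to_expert(dg_code, prototypes):
    """Route an input to the expert whose learned task prototype has the
    highest cosine similarity with the normalized sparse DG code."""
    v = dg_code / (np.linalg.norm(dg_code) + 1e-8)
    P = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sims = P @ v  # cosine similarity to each task prototype
    return int(np.argmax(sims)), sims

def ema_update(prototype, dg_code, momentum=0.99):
    """Online exponential-moving-average update of a task prototype."""
    return momentum * prototype + (1 - momentum) * dg_code
```

Because routing is driven by similarity to stored prototypes rather than by a trained gating network, no auxiliary gating parameters are needed, which matches the summary's claim of eliminating a separate gating network.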

📝 Abstract
We propose HiCL, a novel hippocampal-inspired dual-memory continual learning architecture designed to mitigate catastrophic forgetting by using elements inspired by the hippocampal circuitry. Our system encodes inputs through a grid-cell-like layer, followed by sparse pattern separation using a dentate gyrus-inspired module with top-k sparsity. Episodic memory traces are maintained in a CA3-like autoassociative memory. Task-specific processing is dynamically managed via a DG-gated mixture-of-experts mechanism, wherein inputs are routed to experts based on cosine similarity between their normalized sparse DG representations and learned task-specific DG prototypes computed through online exponential moving averages. This biologically grounded yet mathematically principled gating strategy enables differentiable, scalable task-routing without relying on a separate gating network, and enhances the model's adaptability and efficiency in learning multiple sequential tasks. Cortical outputs are consolidated using Elastic Weight Consolidation weighted by inter-task similarity. Crucially, we incorporate prioritized replay of stored patterns to reinforce essential past experiences. Evaluations on standard continual learning benchmarks demonstrate the effectiveness of our architecture in reducing task interference, achieving near state-of-the-art results in continual learning tasks at lower computational costs.
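A minimal sketch of the consolidation side, assuming a squared-gradient exponential moving average as the online parameter-importance estimate and a multiplicative similarity weight on the EWC quadratic penalty. The decay value and the exact form of the similarity weighting are assumptions; the abstract only states that consolidation is "weighted by inter-task similarity."

```python
import numpy as np

def update_importance(importance, grads, decay=0.95):
    """Online EMA of squared gradients as a Fisher-style importance estimate."""
    return decay * importance + (1 - decay) * grads ** 2

def ewc_penalty(params, anchor_params, importance, task_similarity, lam=1.0):
    """EWC quadratic penalty on deviation from consolidated parameters,
    scaled by a scalar inter-task similarity weight (assumed multiplicative)."""
    diff = params - anchor_params
    return 0.5 * lam * task_similarity * np.sum(importance * diff ** 2)
```

In this sketch the penalty is added to the new task's loss, so parameters deemed important for prior tasks (high EMA importance) resist change in proportion to how related the tasks are.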
Problem

Research questions and friction points this paper is trying to address.

Mitigating catastrophic forgetting in continual learning
Enhancing adaptability and efficiency in sequential tasks
Reducing task interference with biologically inspired architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hippocampal-inspired dual-memory architecture mitigates forgetting
DG-gated mixture-of-experts enables differentiable task routing
Prioritized replay and similarity-weighted EWC consolidation reinforce past experiences
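A toy prioritized replay buffer consistent with the replay bullet above might look like the following. The capacity-eviction rule and proportional sampling scheme are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

class PrioritizedReplay:
    """Minimal prioritized replay: stored patterns are sampled in
    proportion to their priority, so essential traces are rehearsed more often."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items, self.priorities = [], []

    def add(self, item, priority):
        if len(self.items) >= self.capacity:
            # Evict the lowest-priority trace to make room (assumed policy).
            drop = int(np.argmin(self.priorities))
            self.items.pop(drop)
            self.priorities.pop(drop)
        self.items.append(item)
        self.priorities.append(float(priority))

    def sample(self, n, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        p = np.array(self.priorities)
        p = p / p.sum()  # normalize priorities into a sampling distribution
        idx = rng.choice(len(self.items), size=n, p=p)
        return [self.items[i] for i in idx]
```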