PhiNets: Brain-inspired Non-contrastive Learning Based on Temporal Prediction Hypothesis

πŸ“… 2024-05-23
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses representation collapse and poor online and continual-learning performance in non-contrastive self-supervised learning. Inspired by the hippocampal temporal prediction hypothesis, the authors propose PhiNet, the first framework to incorporate a hippocampo-neocortical complementary learning system into non-contrastive learning. PhiNet employs a momentum encoder to emulate slow neocortical learning and a dual-branch predictor to model dynamic CA1-like temporal prediction; it further introduces a raw-representation prediction mechanism to enhance representation stability. The authors support its biological plausibility through a dynamical-systems analysis of the learning dynamics. Experiments demonstrate that PhiNet is significantly more robust to weight decay and outperforms SimSiam in both online and continual learning, effectively mitigating representation collapse.
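To make the summary concrete, here is a minimal NumPy sketch of how a dual-predictor, momentum-encoder objective of this kind could fit together. All names and shapes below are illustrative assumptions based on the summary, not the paper's actual implementation:

```python
import numpy as np

def l2_normalize(v, eps=1e-8):
    # Normalize a vector to unit length.
    return v / (np.linalg.norm(v) + eps)

def neg_cosine(p, z):
    # Negative cosine similarity; z plays the role of a stop-gradient
    # target, as in SimSiam-style non-contrastive losses.
    return -float(np.dot(l2_normalize(p), l2_normalize(z)))

rng = np.random.default_rng(0)
d = 8
W_online = rng.normal(size=(d, d))   # fast online encoder (hippocampus-like)
W_target = W_online.copy()           # slow momentum encoder (neocortex-like)
P_view = rng.normal(size=(d, d))     # predictor branch for the augmented view
P_raw = rng.normal(size=(d, d))      # predictor branch for the raw representation

x_raw = rng.normal(size=d)                 # original image as a feature vector
x_aug1 = x_raw + 0.1 * rng.normal(size=d)  # two augmented views
x_aug2 = x_raw + 0.1 * rng.normal(size=d)

z1 = W_online @ x_aug1    # online representation of view 1
z2 = W_target @ x_aug2    # momentum-target representation of view 2
z_raw = W_target @ x_raw  # momentum-target representation of the raw image

# Dual-branch objective: predict the other view AND the raw representation.
loss = neg_cosine(P_view @ z1, z2) + neg_cosine(P_raw @ z1, z_raw)

# Momentum (EMA) update keeps the target encoder a slow learner.
tau = 0.99
W_target = tau * W_target + (1 - tau) * W_online
```

Each negative-cosine term lies in [-1, 1], so the combined loss lies in [-2, 2]; the EMA update is what makes the target encoder change slowly, acting as long-term memory.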

πŸ“ Abstract
SimSiam is a prominent self-supervised learning method that achieves impressive results in various vision tasks under static environments. However, it has two critical issues: high sensitivity to hyperparameters, especially weight decay, and unsatisfactory performance in online and continual learning, where neuroscientists believe that powerful, brain-like memory functions are necessary. In this paper, we propose PhiNet, inspired by a hippocampal model based on the temporal prediction hypothesis. Unlike SimSiam, which only aligns two augmented views of the original image, PhiNet integrates an additional predictor block that estimates the original image's representation, imitating the CA1 region of the hippocampus. Moreover, inspired by Complementary Learning Systems theory, we model the neocortex with a momentum encoder block that acts as a slow learner and serves as long-term memory. By analysing the learning dynamics, we demonstrate that the additional predictor helps PhiNet prevent complete collapse of the learned representations, a notorious challenge in non-contrastive learning; this analysis may also partially corroborate why the hippocampal model is biologically plausible. Experimental results demonstrate that PhiNet is more robust to weight decay and performs better than SimSiam in memory-intensive tasks such as online and continual learning.
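For reference, the SimSiam objective that the abstract builds on is a symmetrized negative cosine similarity, where each predictor output is matched against the stop-gradient representation of the other view. A minimal NumPy sketch (variable names are illustrative; in the real method the targets are detached from the gradient computation):

```python
import numpy as np

def neg_cos(p, z):
    # Negative cosine similarity between prediction p and target z.
    # In SimSiam, z is treated as a constant via stop-gradient.
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(p @ z)

def simsiam_loss(p1, p2, z1, z2):
    # Symmetrized objective: each view's prediction matches the
    # stop-gradient representation of the other view.
    return 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)

rng = np.random.default_rng(1)
p1, p2, z1, z2 = (rng.normal(size=16) for _ in range(4))
loss = simsiam_loss(p1, p2, z1, z2)
```

PhiNet's contribution, per the abstract, is to add a second predictor targeting the original (un-augmented) image representation on top of this two-view alignment.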
Problem

Research questions and friction points this paper is trying to address.

Explores how the hippocampal temporal prediction hypothesis can inform self-supervised learning
Extends SimSiam with predictors that model the hippocampal CA3-CA1 circuit
Evaluates PhiNet's robustness to weight decay and its adaptivity in online and continual learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

PhiNet extends SimSiam with a dual-predictor architecture
Non-contrastive learning mimics synaptic delay in temporal prediction
X-PhiNet adds a momentum encoder as long-term memory for continual learning
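The momentum-encoder idea in the bullets above (a slow learner serving as long-term memory) is typically realized as an exponential moving average of the fast learner's weights. A hedged sketch, with illustrative names and a toy update loop:

```python
import numpy as np

def ema_update(slow, fast, tau=0.996):
    # Slow (neocortex-like) weights track the fast (hippocampus-like)
    # weights with momentum tau; a higher tau means slower forgetting.
    return tau * slow + (1.0 - tau) * fast

slow = np.zeros(4)  # slow learner starts from a different state
fast = np.ones(4)   # fast learner's (fixed, for illustration) weights
for _ in range(100):
    slow = ema_update(slow, fast, tau=0.9)
# After many steps the slow weights converge toward the fast weights,
# but any single step moves them only a small fraction of the way.
```

The EMA time constant is what separates fast adaptation from stable long-term retention, which is why such a slow learner is a natural fit for continual-learning settings.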
πŸ”Ž Similar Papers
No similar papers found.