Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks

📅 2024-06-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited unsupervised representation learning capability of brain-inspired spiking neural networks, this paper proposes a neocortex-inspired feedforward Bayesian Confidence Propagation Neural Network (BCPNN) model. Methodologically, it is the first to integrally incorporate structural synaptic plasticity and divisive normalization—alongside Hebbian learning, sparse coding, modular cortical column architecture, and patchy connectivity—into the BCPNN framework, enabling hardware-efficient, scalable unsupervised learning. The key innovation lies in unifying functional and structural plasticity within a single computational model and introducing divisive normalization to enhance representation robustness. Evaluated on diverse benchmarks—including MNIST, F-MNIST, CIFAR-10, SVHN, MUV, HIV, and EMBER—the model achieves performance competitive with fully supervised multilayer perceptrons and state-of-the-art brain-inspired networks, using only a linear classifier for downstream tasks.

📝 Abstract
Neural networks that can capture key principles underlying brain computation offer exciting new opportunities for developing artificial intelligence and brain-like computing algorithms. Such networks remain biologically plausible while leveraging localized forms of synaptic learning rules and modular network architecture found in the neocortex. Compared to backprop-driven deep learning approaches, they provide more suitable models for deployment on neuromorphic hardware and have greater potential for scalability on large-scale computing clusters. The development of such brain-like neural networks depends on having a learning procedure that can build effective internal representations from data. In this work, we introduce and evaluate a brain-like neural network model capable of unsupervised representation learning. It builds on the Bayesian Confidence Propagation Neural Network (BCPNN), which has earlier been implemented as abstract as well as biophysically detailed recurrent attractor neural networks explaining various cortical associative memory phenomena. Here we developed a feedforward BCPNN model to perform representation learning by incorporating a range of brain-like attributes derived from neocortical circuits such as cortical columns, divisive normalization, Hebbian synaptic plasticity, structural plasticity, sparse activity, and sparse patchy connectivity. The model was tested on a diverse set of popular machine learning benchmarks: grayscale images (MNIST, F-MNIST), RGB natural images (SVHN, CIFAR-10), QSAR (MUV, HIV), and malware detection (EMBER). The performance of the model when using a linear classifier to predict the class labels fared competitively with conventional multi-layer perceptrons and other state-of-the-art brain-like neural networks.
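The interplay of Hebbian learning, modular hypercolumns, and divisive normalization described in the abstract can be illustrated with a minimal sketch of the classic BCPNN formulation: weights are log-ratios of running co-activation estimates, and divisive normalization acts as a softmax within each hypercolumn. All names, sizes, and the time constant below are illustrative assumptions, not taken from the paper (which additionally includes structural plasticity and sparse patchy connectivity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper)
N_IN, N_HC, N_MC = 8, 4, 5      # inputs; hypercolumns; minicolumns per hypercolumn
N_OUT = N_HC * N_MC
tau = 0.1                        # trace time constant (learning rate)
eps = 1e-6                       # keeps probability estimates strictly positive

# Running probability estimates, initialized uniform
p_i = np.full(N_IN, 1.0 / N_IN)                     # pre-synaptic activity
p_j = np.full(N_OUT, 1.0 / N_OUT)                   # post-synaptic activity
p_ij = np.full((N_IN, N_OUT), 1.0 / (N_IN * N_OUT))  # co-activity

def forward(x):
    """Compute support, then divisive normalization (softmax per hypercolumn)."""
    w = np.log(p_ij / np.outer(p_i, p_j))  # BCPNN weights: log odds of co-activation
    b = np.log(p_j)                        # BCPNN biases: log prior of each unit
    s = b + x @ w                          # support values
    y = np.empty_like(s)
    for h in range(N_HC):                  # normalize activity within each hypercolumn
        sl = slice(h * N_MC, (h + 1) * N_MC)
        e = np.exp(s[sl] - s[sl].max())
        y[sl] = e / e.sum()
    return y

def hebbian_update(x, y):
    """Local Hebbian learning: exponential moving averages of pre-, post-,
    and co-activation probabilities."""
    global p_i, p_j, p_ij
    p_i = (1 - tau) * p_i + tau * (x + eps)
    p_j = (1 - tau) * p_j + tau * (y + eps)
    p_ij = (1 - tau) * p_ij + tau * (np.outer(x, y) + eps)

x = rng.random(N_IN)
y = forward(x)
hebbian_update(x, y)
```

After divisive normalization, the activity within each hypercolumn sums to one, giving the sparse, competitive code the abstract attributes to cortical columns; the update rule itself is purely local (pre- and post-synaptic activity only), which is what makes such models attractive for neuromorphic hardware.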
Problem

Research questions and friction points this paper is trying to address.

Unsupervised Learning
Neuromorphic Networks
Information Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

BCPNN neural network
unsupervised learning
cortical features
Naresh B. Ravichandran
Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology and Swedish e-Science Research Centre, Stockholm, Sweden
A. Lansner
Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology and Swedish e-Science Research Centre, Stockholm, Sweden; Department of Mathematics, Stockholm University, Stockholm, Sweden
P. Herman
Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology and Swedish e-Science Research Centre, Stockholm, Sweden; Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden