Rethinking Hebbian Principle: Low-Dimensional Structural Projection for Unsupervised Learning

πŸ“… 2025-10-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Traditional Hebbian learning suffers from unbounded weight updates and absence of feedback regulation, hindering its effective scaling to deep networks. This paper proposes SPHeReβ€”a novel unsupervised synaptic plasticity method integrating orthogonality constraints with structural information preservation. SPHeRe employs local nonlinear modules for biologically plausible localized computation, introduces a lightweight feedback projection mechanism for error-guided weight adaptation, and enforces structural projections to ensure representational orthogonality and stability. To our knowledge, SPHeRe is the first unsupervised deep learning framework to jointly model feedforward locality, feedback modulation, and synaptic orthogonality. It achieves state-of-the-art performance among unsupervised synaptic plasticity methods on CIFAR-10, CIFAR-100, and Tiny-ImageNet. Moreover, SPHeRe demonstrates superior generalization across continual learning, transfer learning, and image reconstruction tasks.

πŸ“ Abstract
Hebbian learning is a biological principle that intuitively describes how neurons adapt their connections under repeated stimuli. When applied to machine learning, however, it suffers from unconstrained connection updates and a lack of feedback mediation, shortcomings that limit its effective scaling to complex network architectures and tasks. To address this, we introduce the Structural Projection Hebbian Representation (SPHeRe), a novel unsupervised learning method that integrates orthogonality and structural information preservation through a local auxiliary nonlinear block. The structural-information-preservation loss backpropagates to the input through an auxiliary lightweight projection that conceptually serves as feedback mediation, while the orthogonality constraints bound the magnitude of updates. Extensive experimental results show that SPHeRe achieves SOTA performance among unsupervised synaptic plasticity approaches on standard image classification benchmarks, including CIFAR-10, CIFAR-100, and Tiny-ImageNet. Furthermore, the method is effective in continual learning and transfer learning scenarios, and image reconstruction tasks confirm the robustness and generalizability of the extracted features. This work demonstrates the competitiveness and potential of Hebbian unsupervised learning rules within modern deep learning frameworks, pointing to efficient, biologically inspired learning algorithms that do not depend strictly on backpropagation. Our code is available at https://github.com/brain-intelligence-lab/SPHeRe.
Problem

Research questions and friction points this paper is trying to address.

Addresses unconstrained connection updates in Hebbian learning
Solves lack of feedback mediation in biological learning principles
Enables scaling Hebbian learning to complex network architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates orthogonality and structural preservation via local auxiliary block
Uses lightweight projection for feedback mediation in learning
Applies bounded updates through orthogonality constraints on connections
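The combination described above can be illustrated with a toy loss: a structural-preservation term that matches pairwise similarity structure between inputs and their nonlinear projections, plus an orthogonality penalty on the weights that bounds update magnitude. This is only an illustrative NumPy sketch under our own assumptions (the function names, the cosine-similarity structure measure, the tanh nonlinearity, and the 0.1 weighting are not from the paper; see the authors' repository for the actual formulation):

```python
import numpy as np

def orthogonality_loss(W):
    # Penalize deviation of W W^T from the identity; this keeps rows of W
    # near-orthonormal and thereby bounds the effective update magnitude.
    G = W @ W.T
    return np.sum((G - np.eye(W.shape[0])) ** 2)

def structural_preservation_loss(X, Z):
    # Ask the projection Z to preserve the pairwise cosine-similarity
    # structure of the input batch X (one stand-in for "structural
    # information preservation").
    def cos_sim(A):
        A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-8)
        return A @ A.T
    return np.mean((cos_sim(X) - cos_sim(Z)) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 32))      # batch of 16 inputs, 32-dim
W = 0.1 * rng.standard_normal((8, 32)) # projection to 8 dims
Z = np.tanh(X @ W.T)                   # local nonlinear block (assumed tanh)

# Combined objective; 0.1 is an arbitrary trade-off coefficient.
loss = structural_preservation_loss(X, Z) + 0.1 * orthogonality_loss(W)
```

In the paper this loss is minimized locally per block, so gradients need only flow through the lightweight auxiliary projection rather than the whole network.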
πŸ”Ž Similar Papers
No similar papers found.