Guiding Sparse Neural Networks with Neurobiological Principles to Elicit Biologically Plausible Representations

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the gap between deep neural networks and biological nervous systems in generalization, few-shot learning, and continual adaptation. To bridge this gap, we propose a neurobiologically inspired sparse learning framework that spontaneously develops sparse, biologically plausible neural representations without explicit constraints. The architecture adheres to Dale's law, employs log-normal weight initialization, and incorporates self-organizing learning rules, naturally integrating multiple neurophysiological properties. This approach enables flexible scaling from feature-level to task-level encoding and reveals underlying mechanisms of neural resource allocation. Empirical results demonstrate substantially improved generalization under few-shot conditions, markedly enhanced adversarial robustness, and the emergence of highly biologically realistic representations.

📝 Abstract
While deep neural networks (DNNs) have achieved remarkable performance in tasks such as image recognition, they often struggle with generalization, learning from few examples, and continuous adaptation - abilities inherent in biological neural systems. These challenges arise due to DNNs' failure to emulate the efficient, adaptive learning mechanisms of biological networks. To address these issues, we explore the integration of neurobiologically inspired assumptions in neural network learning. This study introduces a biologically inspired learning rule that naturally integrates neurobiological principles, including sparsity, lognormal weight distributions, and adherence to Dale's law, without requiring explicit enforcement. By aligning with these core neurobiological principles, our model enhances robustness against adversarial attacks and demonstrates superior generalization, particularly in few-shot learning scenarios. Notably, integrating these constraints leads to the emergence of biologically plausible neural representations, underscoring the efficacy of incorporating neurobiological assumptions into neural network design. Preliminary results suggest that this approach could extend from feature-specific to task-specific encoding, potentially offering insights into neural resource allocation for complex tasks.
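Two of the neurobiological principles named in the abstract, Dale's law (each neuron's outgoing synapses are all excitatory or all inhibitory) and log-normal synaptic weight distributions, can be combined in a weight initialization. The sketch below is illustrative only, not the paper's actual method; the matrix shape, the `mu`/`sigma` parameters, and the 80/20 excitatory/inhibitory split are assumptions chosen for the example.

```python
import numpy as np

def dale_lognormal_init(n_pre, n_post, frac_excitatory=0.8,
                        mu=-1.0, sigma=1.0, seed=0):
    """Weight matrix with log-normal magnitudes and per-row fixed sign.

    Illustrative sketch: parameters are assumptions, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    # Log-normal magnitudes: heavy-tailed and strictly positive, matching
    # the skewed synaptic-strength distributions observed in cortex.
    magnitudes = rng.lognormal(mean=mu, sigma=sigma, size=(n_pre, n_post))
    # Dale's law: each presynaptic neuron is either excitatory (+1) or
    # inhibitory (-1), so all of its outgoing weights share one sign.
    signs = np.where(rng.random(n_pre) < frac_excitatory, 1.0, -1.0)
    return magnitudes * signs[:, None]

W = dale_lognormal_init(100, 50)
# Every row has a single sign, as Dale's law requires.
assert all(np.all(row > 0) or np.all(row < 0) for row in W)
```

A training procedure that preserves this structure would additionally need sign-preserving updates (e.g. clipping magnitudes at zero), which the sketch does not show.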
Problem

Research questions and friction points this paper is trying to address.

generalization
few-shot learning
biological plausibility
neural representation
adversarial robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

neurobiological principles
sparse neural networks
Dale's law
lognormal weight distributions
few-shot learning
Patrick Inoue
KEIM Institute, Albstadt-Sigmaringen University, Germany
Florian Röhrbein
TUC
Andreas Knoblauch
KEIM Institute, Albstadt-Sigmaringen University, Germany