🤖 AI Summary
Biological neural networks offer advantages in robustness, energy efficiency, and physiological interpretability, yet bio-inspired models often lag behind backpropagation (BP)-based counterparts in accuracy and scalability. To address this, we propose a mechanism for hybridizing heterogeneous biologically plausible learning rules across layers, allowing each layer to adaptively select from diverse neurobiological rules (e.g., STDP, Hippo), and introduce a dedicated neural architecture search (NAS) framework that co-optimizes network topology and per-layer rule assignment. This approach lifts the constraint that a single learning rule be applied uniformly across all layers. Empirical evaluation demonstrates state-of-the-art performance among biologically inspired models on CIFAR-10, CIFAR-100, and ImageNet; notably, certain configurations surpass comparably sized BP models in accuracy while preserving inherent robustness and ultra-low energy consumption.
📝 Abstract
Bio-inspired neural networks are attractive for their adversarial robustness, energy frugality, and closer alignment with cortical physiology, yet they often lag behind backpropagation (BP)-based models in accuracy and scalability. We show that allowing different bio-inspired learning rules in different layers, discovered automatically by a tailored neural architecture search (NAS) procedure, bridges this gap. Starting from standard NAS baselines, we enlarge the search space to include bio-inspired learning rules and use NAS to find the best architecture and learning rule for each layer. Networks that use different bio-inspired learning rules in different layers achieve higher accuracy than those that apply a single rule throughout. The resulting networks, which mix bio-inspired learning rules, set new records for bio-inspired models: 95.16% on CIFAR-10, 76.48% on CIFAR-100, 43.42% on ImageNet16-120, and 60.51% top-1 on ImageNet. In some regimes they even surpass comparable BP-based networks while retaining their robustness advantages. Our results suggest that layer-wise diversity in learning rules improves scalability and accuracy, and they motivate further research on mixing multiple bio-inspired learning rules within a single network.
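The core idea, searching over per-layer learning-rule assignments jointly with the architecture, can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the rule names, the brute-force enumeration, and the proxy score (which simply rewards rule diversity so the example runs without training anything) are all hypothetical stand-ins; the actual framework would train each candidate network and score it by validation accuracy.

```python
import itertools

# Hypothetical per-layer rule choices; the paper's actual rule set may differ.
RULES = ("hebbian", "stdp", "bp_like")
NUM_LAYERS = 3

def proxy_score(assignment):
    """Toy stand-in for candidate evaluation: a real NAS loop would train the
    network with these per-layer rules and return validation accuracy. Here we
    reward rule diversity so the example is self-contained and deterministic."""
    return len(set(assignment))

def search_rule_assignment():
    """Enumerate all per-layer rule assignments and keep the best-scoring one.
    A real search space (topology x rules) is far too large for brute force,
    so practical NAS would use evolutionary or gradient-based search instead."""
    return max(itertools.product(RULES, repeat=NUM_LAYERS), key=proxy_score)

best = search_rule_assignment()
print(best, proxy_score(best))
```

Under this toy score, the winning assignment mixes all three rules, mirroring the paper's finding that heterogeneous per-layer rules outperform any single rule used uniformly.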