🤖 AI Summary
Conventional artificial neurons employ simplified point-neuron models incapable of emulating the local nonlinear computation and spatial integration capabilities of biological dendrites. Method: This work proposes a biologically inspired dendritic neuron hardware architecture based on multi-gate ferroelectric field-effect transistors (FeFETs), uniquely integrating ferroelectric materials’ intrinsic nonlinearity with a multi-gate topology to enable concurrent local computation within dendritic branches and global somatic integration. Leveraging device–circuit–algorithm co-design, the neuron supports compact crossbar-array deployment. Results: Evaluated on benchmarks including MNIST, it matches or exceeds the accuracy of much larger dendrite-free networks while using ~17× fewer trainable parameters. Experimental validation confirms its high energy efficiency, enhanced learning capacity, and hardware scalability for edge neuromorphic systems, establishing a paradigm that addresses fundamental limitations of point-neuron computing.
📝 Abstract
Although inspired by neuronal systems in the brain, artificial neural networks generally employ point neurons, which offer far less computational complexity than their biological counterparts. Biological neurons have dendritic arbors that connect to different sets of synapses and perform local nonlinear accumulation, which plays a pivotal role in processing and learning. Inspired by this, we propose a novel neuron design based on a multi-gate ferroelectric field-effect transistor that mimics dendrites. It leverages ferroelectric nonlinearity for local computations within dendritic branches, while utilizing the transistor action to generate the final neuronal output. The branched architecture paves the way for using smaller crossbar arrays in hardware integration, leading to greater efficiency. Using an experimentally calibrated device-circuit-algorithm co-simulation framework, we demonstrate that networks incorporating our dendritic neurons achieve superior performance compared to much larger networks without dendrites ($\sim$17$\times$ fewer trainable weight parameters). These findings suggest that dendritic hardware can significantly improve the computational efficiency and learning capacity of neuromorphic systems optimized for edge applications.
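The two-stage computation the abstract describes (a local nonlinearity applied per dendritic branch, followed by somatic integration across branches) can be sketched abstractly in software. This is a minimal illustrative model, not the paper's actual device physics: `tanh` is a hypothetical stand-in for the ferroelectric and transistor nonlinearities, and all names and sizes here are invented for illustration.

```python
import numpy as np

def dendritic_neuron(branch_inputs, branch_weights, soma_weights):
    """Toy dendritic neuron: each branch nonlinearly accumulates its own
    synaptic inputs; the soma then combines the branch outputs and applies
    a final activation. tanh is a placeholder nonlinearity, not the FeFET
    characteristic from the paper."""
    branch_outputs = np.array([
        np.tanh(w @ x)                      # local nonlinearity per branch
        for w, x in zip(branch_weights, branch_inputs)
    ])
    return float(np.tanh(soma_weights @ branch_outputs))  # somatic integration

# Hypothetical example: 3 dendritic branches, each seeing 4 synapses.
rng = np.random.default_rng(0)
xs = [rng.normal(size=4) for _ in range(3)]   # per-branch input segments
bw = [rng.normal(size=4) for _ in range(3)]   # per-branch synaptic weights
sw = rng.normal(size=3)                       # somatic (branch-level) weights
y = dendritic_neuron(xs, bw, sw)
print(y)
```

Because each branch sees only its own input segment, a layer of such neurons can be mapped onto several small crossbar arrays (one per branch) instead of one large array, which is the hardware-efficiency argument the abstract makes.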