🤖 AI Summary
To address performance degradation in neural networks caused by memristor device failures, nonlinearities, and process variations, this paper proposes a highly robust, in-situ trainable memristive multilayer neural network architecture. The architecture employs a transistor-free, single-memristor synapse design implemented on a crossbar array, enabling O(1)-complexity parallel matrix-vector multiplication and weight updates—thereby significantly improving energy efficiency and integration density. A customized in-situ training algorithm is developed to mitigate hardware non-idealities. Evaluated on the IRIS, NASA Asteroid, and Breast Cancer Wisconsin datasets, the system achieves classification accuracies of 98.22%, 90.43%, and 98.59%, respectively. LTspice simulations confirm exceptional hardware robustness: accuracy degradation remains below 0.8% under realistic conditions including device failures, strong nonlinearities, and ±10% process variations. This demonstrates both practical viability and superior resilience for memristor-based neuromorphic computing.
📝 Abstract
Neuromorphic architectures, which incorporate parallel and in-memory processing, are crucial for accelerating artificial neural network (ANN) computations. This work presents a novel memristor-based multi-layer neural network (memristive MLNN) architecture and an efficient in-situ training algorithm. The proposed design performs matrix-vector multiplications, outer products, and weight updates in constant time $\mathcal{O}(1)$, leveraging the inherent parallelism of memristive crossbars. Each synapse is realized using a single memristor, eliminating the need for transistors and improving area and energy efficiency. The architecture is evaluated through LTspice simulations on the IRIS, NASA Asteroid, and Breast Cancer Wisconsin datasets, achieving classification accuracies of 98.22%, 90.43%, and 98.59%, respectively. Robustness is assessed by introducing stuck-at-conducting-state faults in randomly selected memristors. The effects of nonlinearity in memristor conductance and a 10% device variation are also analyzed. The simulation results establish that the network's performance is not significantly affected by faulty memristors, nonlinearity, or device variation.
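The $\mathcal{O}(1)$ matrix-vector multiplication follows from basic circuit laws: applying input voltages to the crossbar rows produces, by Ohm's law and Kirchhoff's current law, column currents that are weighted sums of the inputs, all computed simultaneously in the analog domain. Below is a minimal NumPy sketch of this ideal behavior; the function name, conductance values, and inputs are illustrative, not taken from the paper, and real devices add the nonlinearities and variations the paper analyzes.

```python
import numpy as np

def crossbar_mvm(G, v_in):
    """Ideal memristive crossbar read-out.

    Each column current is I_j = sum_i V_i * G[i, j] (Kirchhoff's
    current law), so every output current appears in parallel in a
    single step -- constant time in the number of synapses.
    """
    return v_in @ G

# Hypothetical 3x2 crossbar: conductances in arbitrary units.
G = np.array([[0.5, 0.2],
              [0.1, 0.4],
              [0.3, 0.6]])
v = np.array([1.0, 0.5, -1.0])  # input voltages applied to the rows

out = crossbar_mvm(G, v)
print(out)  # column currents = matrix-vector product
```

In hardware, the "multiply" and "accumulate" are physical effects of the array itself rather than sequential arithmetic, which is why the time cost does not grow with the matrix size.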