Efficient and Fault-Tolerant Memristive Neural Networks with In-Situ Training

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address performance degradation in neural networks caused by memristor device failures, nonlinearities, and process variations, this paper proposes a highly robust, in-situ trainable memristive multilayer neural network architecture. The architecture employs a transistor-free, single-memristor synapse design implemented on a crossbar array, enabling O(1)-complexity parallel matrix-vector multiplication and weight updates—thereby significantly improving energy efficiency and integration density. A customized in-situ training algorithm is developed to mitigate hardware non-idealities. Evaluated on the IRIS, NASA Asteroid, and Breast Cancer Wisconsin datasets, the system achieves classification accuracies of 98.22%, 90.43%, and 98.59%, respectively. LTspice simulations confirm exceptional hardware robustness: accuracy degradation remains below 0.8% under realistic conditions including device failures, strong nonlinearities, and ±10% process variations. This demonstrates both practical viability and superior resilience for memristor-based neuromorphic computing.

📝 Abstract
Neuromorphic architectures, which incorporate parallel and in-memory processing, are crucial for accelerating artificial neural network (ANN) computations. This work presents a novel memristor-based multi-layer neural network (memristive MLNN) architecture and an efficient in-situ training algorithm. The proposed design performs matrix-vector multiplications, outer products, and weight updates in constant time $\mathcal{O}(1)$, leveraging the inherent parallelism of memristive crossbars. Each synapse is realized using a single memristor, eliminating the need for transistors, and offering enhanced area and energy efficiency. The architecture is evaluated through LTspice simulations on the IRIS, NASA Asteroid, and Breast Cancer Wisconsin datasets, achieving classification accuracies of 98.22%, 90.43%, and 98.59%, respectively. Robustness is assessed by introducing stuck-at-conducting-state faults in randomly selected memristors. The effects of nonlinearity in memristor conductance and a 10% device variation are also analyzed. The simulation results establish that the network's performance is not affected significantly by faulty memristors, non-linearity, and device variation.
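The constant-time matrix-vector multiplication claimed above follows from crossbar physics: applying input voltages to the rows produces, by Ohm's and Kirchhoff's laws, column currents that sum all products simultaneously. A minimal numerical sketch of this idea (illustrative only; the dimensions, conductance ranges, and variable names are assumptions, not the paper's circuit):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3
# Conductance matrix G: each entry models one memristor's conductance
# (in siemens) at a crossbar row/column crosspoint.
G = rng.uniform(1e-6, 1e-4, size=(n_in, n_out))
# Input voltages applied to the rows.
V = rng.uniform(0.0, 0.5, size=n_in)

# Each column current is I_j = sum_i G[i, j] * V[i]; the crossbar evaluates
# all n_out currents in one parallel analog step. In simulation this is a
# single matrix-vector product.
I = G.T @ V
```

The hardware cost of this step is independent of the matrix size (all crosspoints conduct at once), which is the sense in which the paper's MVM is O(1) in time.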
Problem

Research questions and friction points this paper is trying to address.

Designing efficient memristor-based neural networks
Developing in-situ training for fault tolerance
Evaluating robustness against device variations and faults
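The robustness evaluation listed above (stuck-at-conducting-state faults plus ±10% device variation) can be approximated in a simple simulation. The helper below is a hypothetical sketch: `inject_faults`, `fault_frac`, and `g_on` are illustrative names and values, not the paper's LTspice methodology.

```python
import numpy as np

rng = np.random.default_rng(1)

def inject_faults(G, fault_frac=0.05, g_on=1e-4, variation=0.10):
    """Return a perturbed copy of conductance matrix G.

    Hypothetical helper: applies a multiplicative +/- `variation` spread
    to every device, then pins a random `fault_frac` of cells to the
    conducting-state value `g_on` (a stuck-at-conducting fault)."""
    Gf = G * rng.uniform(1.0 - variation, 1.0 + variation, size=G.shape)
    stuck = rng.random(G.shape) < fault_frac
    Gf[stuck] = g_on
    return Gf

# Compare ideal and perturbed crossbar outputs for one random input.
G = rng.uniform(1e-6, 1e-4, size=(8, 4))
V = rng.uniform(0.0, 0.5, size=8)
I_ideal = G.T @ V
I_faulty = inject_faults(G).T @ V
rel_err = np.abs(I_faulty - I_ideal) / np.abs(I_ideal)
```

Repeating this over many fault placements and comparing end-to-end classification accuracy, rather than raw current error, is the kind of experiment the paper reports (accuracy degradation below 0.8%).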
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memristor-based multi-layer neural network architecture
Efficient in-situ training algorithm
Single memristor synapses for efficiency
Santlal Prajapat
School of Engineering, Sister Nivedita University, DG 1/2 New Town, Action Area 1, Kolkata - 700156
Manobendra Nath Mondal
School of Computer Science, UPES, Dehradun, India
Susmita Sur-Kolay
Professor of Computer Science
Quantum computing, Electronic design automation, Algorithms