🤖 AI Summary
Memristor-based reservoir computing (RC) hardware suffers from limited accuracy and energy efficiency in image-recognition tasks. To address this, the present work systematically evaluates a range of input-preprocessing techniques and proposes a lightweight pixel-parity mapping strategy. The method introduces minimal hardware overhead while improving recognition accuracy by 2–6% on standard image datasets and reducing energy consumption per unit accuracy by 18%. Crucially, this study provides the first quantitative characterization of how preprocessing influences both the energy efficiency and scalability of memristor RC systems, overcoming performance bottlenecks inherent to conventional approaches such as grayscale normalization and binarization. By establishing a hardware-friendly preprocessing paradigm, this work advances low-power, deployable neuromorphic vision computing.
📝 Abstract
Reservoir computing (RC) has attracted attention as an efficient recurrent neural network architecture because of its simplified training: only the perceptron readout layer needs to be trained. When implemented with memristors, RC systems benefit from the devices' dynamic properties, which make them well suited to reservoir construction. However, achieving high performance in memristor-based RC remains challenging, as performance depends critically on the input-preprocessing method and the reservoir size. Despite growing interest, a comprehensive evaluation quantifying the impact of these factors is still lacking. This paper systematically compares preprocessing methods for memristive RC systems, assessing their effects on accuracy and energy consumption. We also propose a parity-based preprocessing method that improves accuracy by 2–6% while requiring only a modest increase in device count compared with other methods. Our findings highlight the importance of informed preprocessing strategies for improving the efficiency and scalability of memristive RC systems.
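The abstract names a "pixel-parity" mapping but does not spell out the mechanism. As a purely illustrative sketch, one plausible reading is that each image row is binarized and then split into even- and odd-indexed pixel sub-streams, so each row drives two shorter pulse trains into the memristor reservoir; this would roughly double the number of input channels, consistent with the "modest increase in device count" claim. The function name `parity_preprocess` and the binarization threshold are assumptions, not the paper's actual implementation.

```python
import numpy as np

def parity_preprocess(image, threshold=128):
    """Hypothetical sketch of a pixel-parity input mapping for memristive RC.

    Assumes (not confirmed by the paper) that each grayscale row is
    binarized and split into even- and odd-indexed pixel streams, each
    of which would drive a separate reservoir input channel.
    """
    binary = (np.asarray(image) >= threshold).astype(np.uint8)  # binarize pixels
    even = binary[:, 0::2]  # pixels at even column indices
    odd = binary[:, 1::2]   # pixels at odd column indices
    # Each row now yields two half-length pulse sequences for the reservoir.
    return even, odd

# Toy 2x4 "image": two rows of grayscale pixels.
img = np.array([[200, 10, 130, 0],
                [0, 255, 255, 255]])
even, odd = parity_preprocess(img)
```

In this reading, the accuracy gain would come from shorter pulse sequences per channel (less temporal state decay in the memristor) at the cost of extra input devices; the actual mapping should be taken from the paper itself.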