🤖 AI Summary
Conventional machine learning models suffer from high energy consumption, hindering deployment in edge computing. Method: This paper proposes a brain-inspired oscillatory network hardware architecture based on a CMOS ring oscillator array for low-power image classification. It employs a sparsely connected Hopfield network, introduces a forward-only training algorithm, and combines subharmonic injection locking (SHIL) with sparse weight mapping to drastically reduce interconnect redundancy at the hardware level. Contribution/Results: On MNIST, the architecture achieves 98.7% accuracy, 8% higher than conventional deep learning models, while using only 24% of the connections required by a fully connected Hopfield network; this 76% reduction in connectivity costs merely 0.1% in accuracy and yields substantial energy-efficiency gains. Designed for standard CMOS technology, the architecture is manufacturable and scalable. This work addresses the energy-efficiency bottleneck of fully connected architectures, establishing a practical hardware-algorithm co-design paradigm for low-power neuromorphic computing at the edge.
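The sparse weight mapping described above can be illustrated with a small sketch (a hedged illustration under assumed details, not the paper's actual code): train Hopfield weights in a single forward Hebbian pass, then keep only the largest-magnitude 24% of connections. The helper names `hebbian_weights` and `prune_weights` are hypothetical.

```python
import numpy as np

def hebbian_weights(patterns):
    # One forward pass of the Hebbian rule: W = (1/P) * sum_p x_p x_p^T.
    # No backpropagation is involved; the diagonal is zeroed as usual.
    W = patterns.T @ patterns / len(patterns)
    np.fill_diagonal(W, 0.0)
    return W

def prune_weights(W, keep_fraction=0.24):
    # Keep only the keep_fraction largest-magnitude connections
    # (a symmetric mask over the upper triangle); zero the rest.
    iu = np.triu_indices(W.shape[0], k=1)
    vals = np.abs(W[iu])
    k = int(round(keep_fraction * vals.size))
    keep = np.argsort(vals)[-k:]  # indices of the k largest magnitudes
    mask = np.zeros_like(W)
    mask[iu[0][keep], iu[1][keep]] = 1.0
    mask += mask.T
    return W * mask

# Hypothetical bipolar (+1/-1) patterns standing in for binarized images.
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(10, 64))
W_sparse = prune_weights(hebbian_weights(patterns), keep_fraction=0.24)
density = np.count_nonzero(W_sparse) / (W_sparse.size - W_sparse.shape[0])
```

In hardware terms, every zeroed weight is an oscillator-to-oscillator coupling that never has to be wired, which is where the interconnect savings come from.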
📝 Abstract
Machine learning has achieved remarkable advances, but at the cost of significant computational resources, creating an urgent need for novel, energy-efficient computational fabrics. The CMOS Oscillator Network (OscNet) is brain-inspired hardware specially designed for low energy consumption. In this paper, we propose a Hopfield-network-based machine learning algorithm that can be implemented on OscNet. The network is trained using forward propagation alone to learn sparsely connected weights, yet achieves an 8% improvement in accuracy compared to conventional deep learning models on the MNIST dataset. OscNet v1.5 achieves competitive accuracy on MNIST and is well suited for implementation on CMOS-compatible ring oscillator arrays with SHIL. In the oscillator-based implementation, we use only 24% of the connections of a fully connected Hopfield network, with merely a 0.1% drop in accuracy. OscNet v1.5 relies solely on forward propagation and employs sparse connections, making it an energy-efficient machine learning pipeline designed for CMOS oscillator computing. The repository for the OscNet family is: https://github.com/RussRobin/OscNet.
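As a rough sketch of the forward-only pipeline (an illustrative assumption, not the released OscNet code), a Hopfield network can be trained in one Hebbian pass and queried by iterated threshold updates, a digital analogue of letting injection-locked oscillator phases settle into an attractor. The names `train_forward_only` and `recall` are hypothetical.

```python
import numpy as np

def train_forward_only(patterns):
    # Weights come from a single forward Hebbian pass over the data:
    # no gradients, no backpropagation, just the outer-product sum.
    W = patterns.T @ patterns / len(patterns)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=5):
    # Asynchronous sign updates; each flip can only lower the Hopfield
    # energy, so the state settles into a stored attractor.
    state = state.copy()
    for _ in range(sweeps):
        for i in range(state.size):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))  # 3 hypothetical memories
W = train_forward_only(patterns)

noisy = patterns[0].copy()
noisy[rng.choice(100, size=10, replace=False)] *= -1.0  # corrupt 10 bits
restored = recall(W, noisy)
agreement = np.mean(restored == patterns[0])
```

With only 3 stored patterns in 100 units, well below the classical ~0.14N Hopfield capacity, recovery of the corrupted input is reliable.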