🤖 AI Summary
To address the stringent memory and power constraints of miniature imaging systems, this work proposes a lightweight ternary-binary neural network (TBN) hardware accelerator tailored for dynamic vision sensors (DVS). Methodologically, it integrates a query-driven spatial DVS with a reconfigurable mode-switching mechanism that compresses input data via pixel sharing, and employs a dedicated CNN accelerator supporting ternary DVS event encoding and binary-weight inference. Implemented in 28-nm CMOS, the design achieves an 81% reduction in data volume, a 27% decrease in MAC operations, 440 ms inference latency, and only 1.6 mW power consumption, yielding a 7.3× improvement in energy-efficiency figure of merit. The core contribution is a co-optimized DVS-CNN architecture with an ultra-low-bit neural-network hardware mapping, breaking the traditional trade-off among area, energy, and real-time performance in miniature vision systems.
📝 Abstract
Miniature imaging systems are essential for space-constrained applications but face tight memory and power budgets. While machine learning can shrink data by extracting key features, its high energy demands often exceed what small batteries can supply. This paper presents a CNN hardware accelerator optimized for object classification in miniature imaging systems. It processes data from a spatial Dynamic Vision Sensor (DVS) that is reconfigurable into a temporal DVS via pixel sharing, minimizing sensor area. By pairing ternary DVS outputs with a ternary-input, binary-weight neural network, the design reduces both computation and memory requirements. Fabricated in 28 nm CMOS, the accelerator cuts data size by 81% and MAC operations by 27%. It achieves a 440 ms inference time at only 1.6 mW power consumption, improving the Figure of Merit (FoM) by 7.3× over prior CNN accelerators for miniature systems.
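The abstract does not detail the accelerator's datapath, but the arithmetic advantage of ternary inputs with binary weights can be sketched in software (a minimal illustration with an assumed function name; the actual hardware implementation is not described here). With activations in {−1, 0, +1} and weights in {−1, +1}, every "multiply" collapses to a sign-conditioned add, and zero activations (pixels with no DVS event) can be skipped entirely, which is the mechanism behind the reported MAC reduction:

```python
def ternary_binary_dot(inputs, weights):
    """Dot product with ternary inputs ({-1, 0, +1}) and binary weights ({-1, +1}).

    No multiplier is needed: a zero input (no DVS event) is skipped,
    and a nonzero input only flips the sign of the weight.
    """
    acc = 0
    for x, w in zip(inputs, weights):
        if x == 0:                    # no event at this pixel -> skip the MAC
            continue
        acc += w if x > 0 else -w     # "multiply" is just an add or subtract
    return acc


# Sparse ternary DVS-style activations against binary weights:
inputs = [1, 0, -1, 0, 1, 0, 0, -1]
weights = [1, -1, 1, 1, -1, 1, -1, -1]
print(ternary_binary_dot(inputs, weights))  # → 0 (half the terms are skipped)
```

Because half the example inputs are zero, only four of the eight positions contribute any work, mirroring how event sparsity in the DVS output translates directly into fewer operations.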