🤖 AI Summary
To address the high DRAM access latency and lack of end-to-end model inference capability in SRAM-based compute-in-memory (CIM) accelerators for AI edge devices, this work proposes the first SRAM-CIM accelerator architecture supporting full-model inference. The design integrates a CIM-RISC-V heterogeneous computing paradigm with a CIM-optimized instruction set, implemented in TSMC 28nm CMOS technology. It incorporates SRAM-based CIM macros, hardware-pipelined convolution and max-pooling units, weight-reuse scheduling, and a weight-fusion mechanism. Experimental results on keyword spotting demonstrate an 85.14% reduction in inference latency, with an energy efficiency of 3707.84 TOPS/W and a peak throughput of 26.21 TOPS at 50 MHz. This is the first demonstration of complete end-to-end model inference on an SRAM-CIM accelerator, simultaneously delivering high energy efficiency and strong programmability.
📝 Abstract
Computing-in-memory (CIM) has attracted wide attention in deep learning for its high energy efficiency, which results from highly parallel computation with minimal data movement. However, current SRAM-based CIM designs suffer from long latency when loading weights or feature maps from DRAM for large AI models. Moreover, previous SRAM-based CIM architectures lack support for end-to-end model inference. To address these issues, this paper proposes CIMR-V, an end-to-end CIM accelerator with RISC-V that incorporates CIM layer fusion, a convolution/max-pooling pipeline, and weight fusion, resulting in an 85.14% reduction in latency for the keyword spotting model. Furthermore, the proposed CIM-type instructions enable end-to-end AI model inference and a full-stack flow, effectively combining the high energy efficiency of CIM with the high programmability of RISC-V. Implemented in TSMC 28nm technology, the proposed design achieves an energy efficiency of 3707.84 TOPS/W and a throughput of 26.21 TOPS at 50 MHz.
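The latency benefit of fusing convolution with max pooling can be illustrated with a minimal sketch. This is not the paper's hardware pipeline, only an illustrative NumPy model under assumed parameters (a single-channel 3×3 convolution followed by non-overlapping 2×2 max pooling): the fused version produces each pooled output directly from its four convolution results, so the intermediate feature map never needs to be written back to external memory.

```python
import numpy as np

def conv_then_pool(x, w):
    """Baseline: materialize the full 3x3-conv output, then 2x2 max-pool it.
    In hardware this intermediate map would round-trip through memory."""
    H, W = x.shape
    conv = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            conv[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
    Hp, Wp = conv.shape[0] // 2, conv.shape[1] // 2
    pooled = np.zeros((Hp, Wp))
    for i in range(Hp):
        for j in range(Wp):
            pooled[i, j] = conv[2 * i:2 * i + 2, 2 * j:2 * j + 2].max()
    return pooled

def fused_conv_pool(x, w):
    """Fused: each pooled output is reduced on the fly from its four
    convolution windows, so no intermediate feature map is stored."""
    H, W = x.shape
    Hp, Wp = (H - 2) // 2, (W - 2) // 2
    out = np.zeros((Hp, Wp))
    for i in range(Hp):
        for j in range(Wp):
            out[i, j] = max(
                np.sum(x[2 * i + di:2 * i + di + 3,
                         2 * j + dj:2 * j + dj + 3] * w)
                for di in (0, 1) for dj in (0, 1))
    return out
```

Both functions compute identical results; the difference lies only in whether the convolution output is materialized, which is the memory traffic a fused pipeline eliminates.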