🤖 AI Summary
To address the excessive power consumption of high-precision deep neural networks (DNNs) in digital-only compute-in-memory (CIM) architectures, this work proposes an analog-digital hybrid-domain CIM architecture. The core innovation is a first-of-its-kind co-integration of analog floating-point computation units and digital control logic within a single memory cell, enabling lossless floating-point arithmetic while jointly optimizing computational accuracy and energy efficiency. Key enablers include area-efficient analog compute units, low-power bit-serial analog-to-digital converters (ADCs), and carefully co-designed hybrid-domain circuits. Circuit-level simulations confirm zero accuracy degradation across mainstream DNN benchmarks while achieving 3.2×–5.8× higher energy efficiency than state-of-the-art digital CIM implementations. This advance significantly reduces the energy cost of high-precision inference and training at the edge.
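To make the "bit-serial ADC" enabler concrete, the following is a minimal behavioral sketch of a bit-serial (successive-approximation-style) converter. It is an illustrative model only, not the paper's circuit: such converters resolve one output bit per comparison cycle against a binary-weighted threshold, which is what lets them trade conversion latency for power in CIM readout paths.

```python
def bitserial_adc(v, vref=1.0, bits=8):
    """Behavioral model of a bit-serial (SAR-style) ADC.

    Resolves one bit per cycle by comparing the input voltage `v`
    against a binary-weighted threshold, MSB first. Illustrative
    only; the actual low-power converter design is circuit-level.
    """
    code = 0
    threshold = vref / 2.0   # start at mid-scale
    step = vref / 4.0        # next binary weight
    for _ in range(bits):
        bit = 1 if v >= threshold else 0
        code = (code << 1) | bit
        # Move the threshold up or down by the next binary weight.
        threshold += step if bit else -step
        step /= 2.0
    return code

# Mid-scale input resolves to the half-range code.
print(bitserial_adc(0.5))    # -> 128 for 8 bits
```

One comparator reused over `bits` cycles is the key power trade-off: area and static power stay near those of a 1-bit converter, at the cost of serial conversion time.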
📝 Abstract
Compute-in-memory (CIM) has shown significant potential for efficiently accelerating deep neural networks (DNNs) at the edge, particularly for speeding up quantized models in inference applications. Recently, there has been growing interest in floating-point CIM macros that preserve the accuracy of high-precision DNN models in both inference and training. Yet current implementations rely primarily on digital methods, leading to substantial power consumption. This paper introduces a hybrid-domain CIM architecture that integrates analog and digital CIM within the same memory cell to efficiently accelerate high-precision DNNs. Specifically, we develop area-efficient circuits and energy-efficient analog-to-digital conversion techniques to realize this architecture. Comprehensive circuit-level simulations demonstrate that the proposed design achieves notable energy efficiency with lossless accuracy on DNN benchmarks.
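To illustrate why a floating-point workload splits naturally between analog and digital domains, the sketch below models one widely used decomposition in floating-point CIM designs (an assumption for illustration, not necessarily this paper's exact scheme): digital logic aligns exponents to the per-vector maximum, while the aligned integer mantissas are accumulated as fixed-point multiply-accumulates, the part an analog CIM array can evaluate cheaply.

```python
import math

def hybrid_fp_dot(ws, xs, mant_bits=23):
    """Sketch of a hybrid-domain floating-point dot product.

    Illustrative decomposition (hypothetical, not the paper's circuit):
      - digital domain: exponent extraction and max-exponent alignment;
      - analog-friendly domain: integer mantissa multiply-accumulate;
      - digital domain: final rescaling back to floating point.
    """
    def split(v):
        # frexp gives v = m * 2**e with 0.5 <= |m| < 1.
        m, e = math.frexp(v)
        return int(m * (1 << mant_bits)), e  # signed integer mantissa

    pairs = [(split(w), split(x)) for w, x in zip(ws, xs)]
    # Digital domain: largest product exponent sets the alignment.
    emax = max(ew + ex for (_, ew), (_, ex) in pairs)
    # Analog-friendly domain: fixed-point MAC on aligned mantissas.
    acc = 0
    for (mw, ew), (mx, ex) in pairs:
        acc += (mw * mx) >> (emax - (ew + ex))
    # Digital domain: rescale the fixed-point sum to floating point.
    return acc * 2.0 ** (emax - 2 * mant_bits)

print(hybrid_fp_dot([1.0, 2.0], [1.0, 0.5]))  # -> 2.0
```

The alignment shifts discard only bits below the accumulator's precision, which is how such schemes keep the arithmetic effectively lossless while confining the expensive multiply-accumulate work to the compact analog domain.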