🤖 AI Summary
This work addresses the challenge that activation values in deep neural networks tend to accumulate near distribution boundaries, which introduces bias in existing nonlinear quantization methods, degrades performance, and increases reliance on high-resolution analog-to-digital converters (ADCs). To mitigate this, the paper proposes Boundary-Suppressed K-Means Quantization (BS-KMQ), which, for the first time, integrates a boundary outlier suppression mechanism into nonlinear quantization. By equalizing the activation distribution prior to clustering, BS-KMQ generates superior quantization levels. Combined with a reconfigurable in-memory nonlinear ADC, post-training quantization, and low-bit fine-tuning, the method reduces quantization error by at least 3× and improves accuracy by up to 67.7% across multiple models. System-level simulations demonstrate up to 4× speedup and 24× energy efficiency gains over baseline approaches, achieving significant co-optimization of area, energy efficiency, and accuracy.
📝 Abstract
In deep networks, operations such as ReLU and hardware-driven clamping often cause activations to accumulate near the edges of the distribution, leading to biased clustering and suboptimal quantization in existing nonlinear (NL) quantization methods. This paper introduces Boundary-Suppressed K-Means Quantization (BS-KMQ), a novel NL quantization approach designed to reduce the resolution requirements of analog-to-digital converters (ADCs) in in-memory computing (IMC) systems. By suppressing boundary outliers before clustering, BS-KMQ achieves more balanced and informative NL quantization levels. The resulting NL references are implemented using a reconfigurable in-memory NL-ADC, achieving a 7× area improvement over prior NL-ADC designs. When evaluated on ResNet-18, VGG-16, Inception-V3, and DistilBERT, BS-KMQ achieves at least 3× lower quantization error compared to linear, Lloyd-Max, cumulative distribution function (CDF), and K-means methods. It also improves post-training quantization accuracy by up to 66.8%, 25.4%, 66.6%, and 67.7%, respectively, compared to linear quantization. After low-bit fine-tuning, BS-KMQ maintains competitive accuracy with significantly fewer NL-ADC levels (3/3/4/4b). System-level simulations on ResNet-18 (6/2/3b) demonstrate up to a 4× speedup and 24× energy efficiency improvement over existing IMC accelerators.
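The core idea — suppress the mass that piles up at the distribution boundaries (e.g., the ReLU spike at zero and the clamp spike at the upper edge), then run 1-D k-means on the remaining activations to pick quantization levels — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`bs_kmeans_levels`, `quantize`) and the quantile-based outlier cutoff `boundary_q` are assumptions for demonstration; the paper's actual suppression mechanism and NL-ADC mapping may differ.

```python
import numpy as np

def bs_kmeans_levels(acts, n_levels, boundary_q=0.02, n_iter=50, seed=0):
    """Sketch of boundary-suppressed k-means: drop the activation mass
    accumulated at the distribution edges, then cluster the interior
    values to obtain nonlinear quantization levels.
    (Illustrative only; cutoff rule is an assumption, not the paper's.)"""
    acts = np.asarray(acts, dtype=np.float64).ravel()
    # Suppress boundary outliers: keep only values strictly inside the
    # [boundary_q, 1 - boundary_q] quantile range, which removes the
    # spikes created by ReLU (at 0) and hardware clamping (at the max).
    lo, hi = np.quantile(acts, [boundary_q, 1.0 - boundary_q])
    inner = acts[(acts > lo) & (acts < hi)]
    # Plain 1-D k-means (Lloyd iterations) on the equalized distribution.
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(inner, size=n_levels, replace=False))
    for _ in range(n_iter):
        assign = np.argmin(np.abs(inner[:, None] - centers[None, :]), axis=1)
        for k in range(n_levels):
            members = inner[assign == k]
            if members.size:
                centers[k] = members.mean()
        centers = np.sort(centers)
    return centers

def quantize(x, centers):
    """Map each value to its nearest quantization level."""
    idx = np.argmin(np.abs(np.asarray(x)[..., None] - centers), axis=-1)
    return centers[idx]
```

With a ReLU-plus-clamp activation distribution, the suppression step removes the zero and saturation spikes so the k-means centers spread over the informative interior range rather than being pulled toward the boundaries.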