🤖 AI Summary
Can approximate multipliers be effectively deployed in extremely low-bit (down to 2-bit) quantized deep neural networks (DNNs)?
Method: We present FAMES, a fast approximate multiplier (AppMul) substitution method for ultra-low-bit, mixed-precision DNNs, constituting the first systematic validation and hardware implementation of approximate multipliers at such bitwidths. FAMES is a lightweight, scalable hardware–algorithm co-optimization framework that combines quantization-aware design, per-layer approximate multiplier selection, and joint accuracy–energy optimization.
Contribution/Results: Our approach overturns the assumption that approximate computing is viable only at higher bitwidths. Evaluated on state-of-the-art mixed-precision quantized models with bitwidths as low as 2, it achieves an average energy reduction of 28.67% while keeping accuracy degradation under 1%, and its substitution search runs up to 300× faster than previous genetic algorithm–based methods. The framework thus enables practical deployment of approximate arithmetic in extreme quantization regimes while preserving model fidelity and delivering substantial hardware efficiency gains.
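To make the core idea concrete, here is a minimal sketch of what substituting an approximate multiplier into a 2-bit quantized layer looks like. This is not the FAMES implementation: the LUT-based multiplier model, the single injected error entry, and all function names are illustrative assumptions.

```python
# Minimal sketch (NOT the FAMES implementation): emulating a LUT-modeled
# approximate multiplier inside a 2-bit quantized matrix multiply.
import numpy as np

BITS = 2
QMIN, QMAX = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1  # signed 2-bit range: [-2, 1]

def quantize(x, scale):
    """Uniform symmetric quantization to signed 2-bit integers."""
    return np.clip(np.round(x / scale), QMIN, QMAX).astype(np.int64)

def build_approx_lut(error_at=(-2, -2)):
    """LUT over all (a, b) 2-bit operand pairs; one entry is deliberately wrong
    to mimic an approximate multiplier's error (purely illustrative)."""
    vals = range(QMIN, QMAX + 1)
    lut = {(int(a), int(b)): int(a * b) for a in vals for b in vals}
    a, b = error_at
    lut[(a, b)] = a * b - 1  # injected approximation error
    return lut

def approx_matmul(Aq, Bq, lut):
    """Matrix product where every scalar multiply goes through the LUT."""
    M, K = Aq.shape
    _, N = Bq.shape
    out = np.zeros((M, N), dtype=np.int64)
    for i in range(M):
        for j in range(N):
            out[i, j] = sum(lut[(int(Aq[i, k]), int(Bq[k, j]))] for k in range(K))
    return out

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 8)), rng.standard_normal((8, 3))
sa, sb = np.abs(A).max() / QMAX, np.abs(B).max() / QMAX  # naive per-tensor scales
Aq, Bq = quantize(A, sa), quantize(B, sb)

exact = (Aq @ Bq) * sa * sb                               # exact integer multiplies
approx = approx_matmul(Aq, Bq, build_approx_lut()) * sa * sb
print("mean |error| introduced by the approximate multiplier:",
      np.abs(exact - approx).mean())
```

A real co-optimization framework would then search, per layer, over a library of such multipliers to trade this error against energy; the sketch only shows the substitution mechanics.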
📝 Abstract
A widely used technique in designing energy-efficient deep neural network (DNN) accelerators is quantization. Recent progress in this direction has reduced the bitwidths used in DNNs down to 2. Meanwhile, many prior works apply approximate multipliers (AppMuls) in designing DNN accelerators to lower their energy consumption. Unfortunately, these works still assume a bitwidth much larger than 2, which falls far behind the state of the art in the quantization area and even challenges the meaningfulness of applying AppMuls in DNN accelerators, since a high-bitwidth AppMul consumes much more energy than a low-bitwidth exact multiplier! Thus, an important problem to study is: can approximate multipliers be effectively applied to quantized DNN models with very low bitwidths? In this work, we give an affirmative answer to this question and present a systematic solution: FAMES, a fast approximate multiplier substitution method for mixed-precision DNNs. Our experiments demonstrate an average 28.67% energy reduction on state-of-the-art mixed-precision quantized models with bitwidths as low as 2 bits and accuracy losses kept under 1%. Additionally, our approach is up to 300x faster than previous genetic algorithm-based methods.
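The abstract's energy argument can be made concrete with a crude back-of-envelope model. Multiplier energy grows roughly quadratically with operand bitwidth; the exact scaling law and the 50% savings figure below are my assumptions for illustration, not numbers from the paper.

```python
# Rough first-order illustration (assumed model, not from the paper):
# array-multiplier energy scales roughly with bits^2, so even a heavily
# approximated high-bitwidth multiplier can cost more than an exact
# low-bitwidth one.
def relative_mult_energy(bits: int) -> float:
    """Crude O(bits^2) relative energy model for a multiplier (illustrative)."""
    return bits ** 2

exact_2bit = relative_mult_energy(2)          # exact 2-bit multiplier
approx_8bit = 0.5 * relative_mult_energy(8)   # 8-bit AppMul, assumed 50% savings
print(f"exact 2-bit: {exact_2bit}, approximate 8-bit: {approx_8bit}")
# -> exact 2-bit: 4, approximate 8-bit: 32.0
# Even with aggressive approximation, the high-bitwidth unit dominates,
# which is why AppMuls must be co-designed with very low bitwidths.
```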