AI Summary
In FPGA deployments of quantized neural networks, non-matrix-multiplication operations can become critical bottlenecks in both performance and resource utilization. To address this, the paper proposes a static precision-optimization method tailored to dataflow accelerators. It introduces scaled-integer range analysis to jointly estimate tensor dynamic ranges, scale factors, and biases, enabling adaptive accumulator bitwidth allocation, aggregation of scales and biases, and fusion of consecutive elementwise operations into thresholding operations. Integrated into the open-source FINN framework, the method enables bitwidth customization and operator restructuring specifically for non-matmul layers. Experiments demonstrate average reductions of 17% in LUTs, 66% in DSPs, and 22% in accumulator bitwidth, significantly improving the resource and energy efficiency of quantized neural network accelerators on FPGAs.
Abstract
While neural network quantization effectively reduces the cost of matrix multiplications, aggressive quantization can expose non-matrix-multiply operations as significant performance and resource bottlenecks on embedded systems. Addressing such bottlenecks requires a comprehensive approach to tailoring the precision across operations in the inference computation. To this end, we introduce scaled-integer range analysis (SIRA), a static analysis technique employing interval arithmetic to determine the range, scale, and bias for tensors in quantized neural networks. We show how this information can be exploited to reduce the resource footprint of FPGA dataflow neural network accelerators via tailored bitwidth adaptation for accumulators and downstream operations, aggregation of scales and biases, and conversion of consecutive elementwise operations to thresholding operations. We integrate SIRA-driven optimizations into the open-source FINN framework, then evaluate their effectiveness across a range of quantized neural network workloads and compare implementation alternatives for non-matrix-multiply operations. We demonstrate an average reduction of 17% for LUTs, 66% for DSPs, and 22% for accumulator bitwidths with SIRA optimizations, providing detailed benchmark analysis and analytical models to guide the implementation style for non-matrix layers. Finally, we open-source SIRA to facilitate community exploration of its benefits across various applications and hardware platforms.
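To make the core idea concrete, below is a minimal sketch (not the paper's actual SIRA implementation) of how interval arithmetic over integer tensors can bound an accumulator's worst-case range and thereby derive a minimal signed bitwidth. The `Interval` class, `signed_bits` helper, and the 4-bit example parameters are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of interval-arithmetic range analysis for a
# quantized dot product; names and parameters are hypothetical.
class Interval:
    """Closed integer interval [lo, hi]."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: endpoints add independently.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product of intervals: extremes occur at endpoint combinations.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def signed_bits(iv):
    """Smallest two's-complement width that holds every value in iv."""
    bits = 1
    while not (-(1 << (bits - 1)) <= iv.lo <= iv.hi <= (1 << (bits - 1)) - 1):
        bits += 1
    return bits

# Example: a 64-term dot product of unsigned 4-bit activations [0, 15]
# with signed 4-bit weights [-8, 7].
act, wgt = Interval(0, 15), Interval(-8, 7)
acc = Interval(0, 0)
for _ in range(64):
    acc = acc + act * wgt
print(acc.lo, acc.hi, signed_bits(acc))  # [-7680, 6720] fits in 14 bits
```

Because the analysis is static, such bounds can be propagated through a network's graph at compile time, sizing each accumulator to its actual worst-case range instead of a conservative default width.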