SIRA: Scaled-Integer Range Analysis for Optimizing FPGA Dataflow Neural Network Accelerators

📅 2025-08-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In FPGA deployments of quantized neural networks, non-matrix-multiplication operations can become critical bottlenecks in both performance and resource utilization. To address this, the paper proposes a static precision-optimization method tailored to dataflow accelerators. Its key idea is scaled-integer interval analysis, which jointly estimates tensor dynamic ranges, scale factors, and biases, enabling adaptive accumulator bitwidth allocation, aggregation of scales and biases, and fusion of consecutive elementwise operations into thresholding operations. Integrated into the FINN framework, the method customizes bitwidths and restructures operators specifically for non-matmul layers. Experiments demonstrate average reductions of 17% in LUTs, 66% in DSPs, and 22% in accumulator bitwidth, significantly improving the resource and energy efficiency of quantized neural network accelerators on FPGAs.
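To make the core abstraction concrete, here is a minimal Python sketch of scaled-integer interval propagation in the spirit of the summary above: each tensor is abstracted as an integer interval plus a real-valued scale and bias, and per-operation rules push these through the graph. The `ScaledInterval` class and rule functions are illustrative assumptions, not FINN's actual API.

```python
from dataclasses import dataclass

@dataclass
class ScaledInterval:
    """Abstraction of a quantized tensor: integers q in [lo, hi],
    representing real values approximately scale * q + bias.
    (Illustrative sketch; not FINN's actual data structure.)"""
    lo: int
    hi: int
    scale: float
    bias: float

def add(a: ScaledInterval, b: ScaledInterval) -> ScaledInterval:
    """Elementwise add of operands on a common scale:
    integer ranges add, biases aggregate, scale is unchanged."""
    assert a.scale == b.scale, "rescale operands to a common scale first"
    return ScaledInterval(a.lo + b.lo, a.hi + b.hi, a.scale, a.bias + b.bias)

def mul_const(a: ScaledInterval, c: int) -> ScaledInterval:
    """Multiply by an integer constant: c*(s*q + b) = s*(c*q) + c*b,
    so both interval ends scale (a negative c flips the interval)."""
    ends = sorted((a.lo * c, a.hi * c))
    return ScaledInterval(ends[0], ends[1], a.scale, a.bias * c)

# e.g. adding an int8 residual branch onto a uint8 activation:
x = ScaledInterval(0, 255, 0.02, -1.28)
y = add(x, ScaledInterval(-128, 127, 0.02, 0.0))
print(y)  # ScaledInterval(lo=-128, hi=382, scale=0.02, bias=-1.28)
```

Tracking the bias alongside range and scale is what enables the aggregation mentioned above: chains of affine elementwise operations fold into a single scale and bias instead of separate hardware stages.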

📝 Abstract
While neural network quantization effectively reduces the cost of matrix multiplications, aggressive quantization can expose non-matrix-multiply operations as significant performance and resource bottlenecks on embedded systems. Addressing such bottlenecks requires a comprehensive approach to tailoring the precision across operations in the inference computation. To this end, we introduce scaled-integer range analysis (SIRA), a static analysis technique employing interval arithmetic to determine the range, scale, and bias for tensors in quantized neural networks. We show how this information can be exploited to reduce the resource footprint of FPGA dataflow neural network accelerators via tailored bitwidth adaptation for accumulators and downstream operations, aggregation of scales and biases, and conversion of consecutive elementwise operations to thresholding operations. We integrate SIRA-driven optimizations into the open-source FINN framework, then evaluate their effectiveness across a range of quantized neural network workloads and compare implementation alternatives for non-matrix-multiply operations. We demonstrate an average reduction of 17% for LUTs, 66% for DSPs, and 22% for accumulator bitwidths with SIRA optimizations, providing detailed benchmark analysis and analytical models to guide the implementation style for non-matrix layers. Finally, we open-source SIRA to facilitate community exploration of its benefits across various applications and hardware platforms.
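One direct payoff of proven ranges is accumulator sizing: instead of a pessimistic fixed width, each accumulator can be sized to the interval the analysis guarantees for it. A minimal sketch, using a hypothetical `signed_bits` helper rather than anything from the paper's code:

```python
def signed_bits(lo: int, hi: int) -> int:
    """Smallest signed bitwidth n with -2**(n-1) <= lo and hi <= 2**(n-1) - 1."""
    n = 1
    while lo < -(1 << (n - 1)) or hi > (1 << (n - 1)) - 1:
        n += 1
    return n

# Worst-case dot product of 64 terms: 4-bit unsigned activations (0..15)
# times 4-bit signed weights (-8..7) can only reach [-7680, 6720].
print(signed_bits(-8 * 15 * 64, 7 * 15 * 64))  # 14 bits, not a default 32
```

Analyzing the concrete weight values, as a static range analysis can, may prove still narrower intervals than this datatype-only worst case.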
Problem

Research questions and friction points this paper is trying to address.

Optimizing FPGA accelerators for quantized neural networks
Addressing bottlenecks from non-matrix-multiply operations
Tailoring precision across operations via range analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scaled-integer range analysis using interval arithmetic
Tailored bitwidth adaptation for accumulators and operations
Conversion of consecutive elementwise operations to thresholding operations (see the sketch below)
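The thresholding conversion in the last item works because any monotonically non-decreasing chain of elementwise operations over a small integer domain collapses into a sorted list of thresholds, so the hardware only compares and counts. The functions below are a minimal sketch of that idea; the actual conversion in FINN (its MultiThreshold operation) additionally handles scales, biases, and per-channel parameters.

```python
def to_thresholds(f, in_lo: int, in_hi: int, out_levels: int) -> list[int]:
    """Collapse a monotonically non-decreasing integer function f on
    [in_lo, in_hi] into thresholds: the output equals the number of
    thresholds the input meets or exceeds. (Illustrative sketch.)"""
    xs = range(in_lo, in_hi + 1)
    ys = [f(x) for x in xs]
    assert all(a <= b for a, b in zip(ys, ys[1:])), "f must be monotonic"
    thresholds = []
    for level in range(1, out_levels):
        # smallest input whose output reaches `level`, if it exists
        t = next((x for x, y in zip(xs, ys) if y >= level), None)
        if t is not None:
            thresholds.append(t)
    return thresholds

def apply_thresholds(x: int, thresholds: list[int]) -> int:
    """What the hardware evaluates per element: compare and count."""
    return sum(x >= t for t in thresholds)

# Quantized ReLU: 3-bit signed input (-4..3) to 2-bit unsigned output (0..3).
ts = to_thresholds(lambda x: min(max(x, 0), 3), -4, 3, out_levels=4)
print(ts)                       # [1, 2, 3]
print(apply_thresholds(2, ts))  # 2, matching the original function at x=2
```

This trades elementwise arithmetic for comparisons, which map naturally onto LUTs rather than DSPs.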
Yaman Umuroglu
AMD Research & Advanced Development
computer architecture, FPGAs, irregular applications, memory systems, quantized deep learning
Christoph Berganski
Paderborn University, Germany
Felix Jentzsch
Paderborn University, Germany
Michał Daniłowicz
AGH University of Krakow, Poland
Tomasz Kryjak
AGH University of Krakow, Poland
Charalampos Bezaitis
Norwegian University of Science and Technology, Norway
Magnus Själander
Norwegian University of Science and Technology, Norway
Ian Colbert
Advanced Micro Devices (AMD)
Deep Learning, Quantization
Thomas Preusser
AMD Research, Germany
Jakoba Petri-Koenig
AMD
Michaela Blott
AMD Research
Machine Learning, Data Centers, FPGAs