🤖 AI Summary
Deploying neural networks with microsecond-scale latency on FPGAs is bottlenecked by on-chip area, driven largely by the required constant matrix–vector multiplication (CMVM) operations.
Method: This paper proposes an efficient constant matrix–vector multiplication optimization based on distributed arithmetic (DA), integrating full unrolling with deep pipelining to achieve single-cycle throughput while minimizing hardware resource consumption. The designed DA algorithm jointly optimizes computational efficiency and area utilization.
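For intuition, distributed arithmetic replaces per-element multipliers with lookup tables of precomputed partial column sums, accumulating one bit-slice of the input per step. The sketch below is illustrative only (plain NumPy, unsigned fixed-point inputs, a hypothetical `da_cmvm` helper) and is not the paper's hardware implementation:

```python
import numpy as np

def da_cmvm(C, x, n_bits):
    """Constant matrix-vector multiply y = C @ x via distributed arithmetic.

    Precompute a LUT with 2^n entries, where entry p holds the sum of the
    columns of C selected by the bits of p. Then accumulate one bit-slice
    of x per step, shifted by that bit's weight. In hardware, the LUT maps
    to FPGA logic and the loop over bits is unrolled/pipelined.
    """
    m, n = C.shape
    # LUT[p] = sum of columns j of C where bit j of p is set.
    lut = np.zeros((1 << n, m), dtype=np.int64)
    for p in range(1 << n):
        for j in range(n):
            if (p >> j) & 1:
                lut[p] += C[:, j]
    # Accumulate bit-slices of x (unsigned fixed point for simplicity).
    y = np.zeros(m, dtype=np.int64)
    for b in range(n_bits):
        addr = sum(((x[j] >> b) & 1) << j for j in range(n))
        y += lut[addr] << b
    return y
```

For example, with `C = [[1, 2], [3, 4]]` and `x = [3, 5]`, the bit-serial accumulation reproduces the direct product `C @ x = [13, 29]`, with no multipliers used.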
Contribution/Results: Integrated into the hls4ml open-source framework, the method enables end-to-end deployment of highly quantized real-world networks. It reduces on-chip FPGA resource usage by up to 33% and decreases inference latency, thereby enabling deployment of ultra-low-latency models previously infeasible due to resource constraints. This advancement is particularly valuable for latency-critical applications such as high-energy physics experiments.
📝 Abstract
Neural networks with a latency requirement on the order of microseconds, like the ones used at the CERN Large Hadron Collider, are typically deployed on FPGAs fully unrolled and pipelined. A bottleneck for the deployment of such neural networks is area utilization, which is directly related to the required constant matrix–vector multiplication (CMVM) operations. In this work, we propose an efficient algorithm for implementing CMVM operations with distributed arithmetic (DA) on FPGAs that simultaneously optimizes for area consumption and latency. The algorithm achieves resource reduction similar to state-of-the-art algorithms while being significantly faster to compute. The proposed algorithm is open-sourced and integrated into the `hls4ml` library, a free and open-source library for running real-time neural network inference on FPGAs. We show that the proposed algorithm can reduce on-chip resources by up to a third for realistic, highly quantized neural networks while simultaneously reducing latency, enabling the implementation of previously infeasible networks.