🤖 AI Summary
To address the critical bottlenecks of high data movement overhead and low energy efficiency in CNN accelerators, this paper proposes the Triangular Input Movement (TrIM) dataflow, which schedules input feature maps along a triangular trajectory to maximize data reuse, cutting off-chip memory accesses by an order of magnitude compared to state-of-the-art systolic arrays. Complementing the dataflow, the authors design a customized systolic array architecture and implement VGG-16 and AlexNet inference on an FPGA. The accelerator requires up to ~3× fewer memory accesses than a state-of-the-art row-stationary systolic array, achieves a peak computational throughput of 453.6 GOPS, and is up to 11.9× more energy-efficient than comparable FPGA-based accelerators. By delivering high throughput at low power, the design offers a compelling example of memory-compute co-optimization in deep learning hardware.
📝 Abstract
Modern hardware architectures for Convolutional Neural Networks (CNNs), besides targeting high performance, aim at dissipating limited energy. Reducing the cost of data movement between the computing cores and the memory is one way to mitigate energy consumption. Systolic arrays are suitable architectures to achieve this objective: they use multiple processing elements that communicate with each other to maximize data utilization, following proper dataflows such as weight stationary and row stationary. Motivated by this, we have proposed TrIM, an innovative dataflow based on a triangular movement of inputs, capable of reducing the number of memory accesses by one order of magnitude compared to state-of-the-art systolic arrays. In this paper, we present a TrIM-based hardware architecture for CNNs. As a showcase, the accelerator is implemented on a Field Programmable Gate Array (FPGA) to execute the VGG-16 and AlexNet CNNs. The architecture achieves a peak throughput of 453.6 Giga Operations per Second (GOPS), outperforms a state-of-the-art row-stationary systolic array by up to ~3x in terms of memory accesses, and is up to ~11.9x more energy-efficient than other FPGA accelerators.
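The memory-access savings that motivate dataflows like TrIM come from reusing inputs shared by overlapping convolution windows instead of re-fetching them. The sketch below is not the actual TrIM schedule (whose triangular movement is detailed in the paper); it only counts input reads for a hypothetical 1-D convolution, contrasting a naive scheme, where every output fetches its whole window from memory, with full inter-PE reuse, where each input is fetched once and then forwarded between processing elements:

```python
def naive_reads(n_inputs: int, kernel: int, stride: int = 1) -> int:
    """Input reads when every output window is fetched from memory."""
    n_outputs = (n_inputs - kernel) // stride + 1
    return n_outputs * kernel

def reuse_reads(n_inputs: int, kernel: int, stride: int = 1) -> int:
    """Input reads with full reuse: each input is fetched exactly once,
    then circulated among processing elements instead of re-read."""
    return n_inputs

# Hypothetical example: a 224-wide row (VGG-16 input size) and a 3-tap kernel.
k, n = 3, 224
print(naive_reads(n, k))   # 666 reads without reuse
print(reuse_reads(n, k))   # 224 reads with full reuse
```

With a kernel of size k and unit stride, full reuse approaches a k-fold reduction in input reads; real dataflows fall between the two bounds depending on how much on-chip buffering and inter-PE communication they exploit.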