TrIM, Triangular Input Movement Systolic Array for Convolutional Neural Networks: Architecture and Hardware Implementation

📅 2024-08-05
🏛️ IEEE Transactions on Circuits and Systems I: Regular Papers
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address the bottlenecks of high data-movement overhead and low energy efficiency in CNN accelerators, this paper proposes the Triangular Input Movement (TrIM) dataflow, which schedules input feature maps along a triangular trajectory to cut memory accesses by an order of magnitude compared to state-of-the-art systolic arrays. Complementing the dataflow, the authors design a customized systolic-array architecture and implement VGG-16 and AlexNet inference on an FPGA. The accelerator achieves a peak throughput of 453.6 GOPS, requires up to ~3× fewer memory accesses than a state-of-the-art row-stationary systolic array, and is up to ~11.9× more energy-efficient than comparable FPGA accelerators. By delivering high throughput at low power, the design demonstrates memory-compute co-optimization for deep learning hardware.

📝 Abstract
Modern hardware architectures for Convolutional Neural Networks (CNNs) aim not only at high performance but also at dissipating limited energy. Reducing the cost of data movement between the computing cores and memory is one way to mitigate energy consumption. Systolic arrays are well suited to this objective: they use multiple processing elements that communicate with each other to maximize data utilization, following dataflows such as weight stationary and row stationary. Motivated by this, we previously proposed TrIM, an innovative dataflow based on a triangular movement of inputs that reduces the number of memory accesses by one order of magnitude compared to state-of-the-art systolic arrays. In this paper, we present a TrIM-based hardware architecture for CNNs. As a showcase, the accelerator is implemented on a Field Programmable Gate Array (FPGA) to execute the VGG-16 and AlexNet CNNs. The architecture achieves a peak throughput of 453.6 Giga Operations per Second, requires up to ~3× fewer memory accesses than a state-of-the-art row-stationary systolic array, and is up to ~11.9× more energy-efficient than other FPGA accelerators.
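The abstract's premise is that convolution inherently re-reads the same input pixels many times, and that dataflows like weight stationary, row stationary, or TrIM keep those reuses on-chip instead of going back to memory. The sketch below is not the paper's TrIM dataflow (whose scheduling details are not given here); it is a minimal, hypothetical illustration that counts input fetches in a naive direct convolution to show the K×K-fold reuse a systolic array can exploit:

```python
import numpy as np

def conv2d_direct(x, w):
    """Direct 2D convolution (stride 1, no padding) that also counts
    input-element reads. Each input pixel is fetched up to K*K times,
    which is the redundancy systolic dataflows keep on-chip."""
    H, W = x.shape
    K = w.shape[0]
    out = np.zeros((H - K + 1, W - K + 1))
    reads = 0  # one count per input fetch in the inner MAC loop
    for i in range(H - K + 1):
        for j in range(W - K + 1):
            for ki in range(K):
                for kj in range(K):
                    out[i, j] += x[i + ki, j + kj] * w[ki, kj]
                    reads += 1
    return out, reads

# Hypothetical toy sizes, chosen only for illustration.
x = np.arange(64, dtype=float).reshape(8, 8)
w = np.ones((3, 3))
out, naive_reads = conv2d_direct(x, w)
print(naive_reads, x.size)  # → 324 64  (~5x redundant fetches here)
```

With an 8×8 input and a 3×3 kernel, the 36 output positions each fetch 9 inputs (324 reads) against only 64 unique input elements; for the larger feature maps in VGG-16 the redundancy approaches the full 9×, which is the traffic a reuse-aware dataflow avoids.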
Problem

Research questions and friction points this paper is trying to address.

High-performance Hardware Architecture
Convolutional Neural Networks (CNN)
Energy Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

TrIM Design
CNN Hardware Optimization
FPGA Acceleration
Cristian Sestito
Shady O. Agwa
T. Prodromakis
Centre for Electronics Frontiers, Institute for Integrated Micro and Nano Systems, School of Engineering, The University of Edinburgh, EH9 3BF, Edinburgh, United Kingdom