🤖 AI Summary
This work addresses the difficulty of sustaining high compute utilization in general-purpose DNN accelerators across diverse AI workloads, a shortfall that limits energy efficiency. To overcome this, the authors propose Voltra, a chip built around a three-dimensional spatial dataflow architecture that enables balanced data reuse along three directions. Voltra further integrates a hybrid-granularity prefetching scheme and a dynamic shared-memory allocation mechanism to improve spatiotemporal data utilization. Fabricated in a 16 nm process, Voltra achieves an energy efficiency of 1.60 TOPS/W and an area efficiency of 1.25 TOPS/mm². Compared with a conventional 2D architecture, it demonstrates a 2.0× improvement in spatial utilization, 2.12–2.94× higher temporal utilization, and 1.15–2.36× end-to-end latency speedups across representative workloads.
📝 Abstract
Achieving high compute utilization across a wide range of AI workloads is crucial for the efficiency of versatile DNN accelerators. This paper presents the Voltra chip and its utilization-optimized DNN accelerator architecture, which leverages 3-Dimensional (3D) spatial data reuse along with efficient and flexible shared-memory access. The 3D spatial dataflow enables balanced spatial data reuse across three dimensions, improving spatial utilization by up to 2.0× compared to a conventional 2D design. Within the shared-memory access architecture, Voltra incorporates flexible data streamers that enable mixed-grained hardware data prefetching and dynamic memory allocation, improving temporal utilization by 2.12–2.94× and achieving 1.15–2.36× total latency speedup over non-prefetching and separate-memory baselines, respectively. Fabricated in 16 nm technology, our chip achieves 1.60 TOPS/W peak system energy efficiency and 1.25 TOPS/mm² system area efficiency, competitive with state-of-the-art solutions while sustaining high utilization across diverse workloads.
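The spatial-utilization argument can be made concrete with a toy model: when each workload dimension is tiled across one hardware dimension of the PE array, edge tiles leave PEs idle, and spreading the same PE budget over three matched dimensions can waste less than folding everything onto two. The sketch below is purely illustrative; the array shapes, workload sizes, and the tiling model are assumptions for exposition, not Voltra's actual mapping or the paper's measured numbers.

```python
import math

def spatial_utilization(workload_dims, array_dims):
    """Fraction of PE-cycles doing useful work when each workload
    dimension is tiled across one hardware dimension of the PE array.
    Edge tiles that only partially fill the array waste the remainder."""
    useful = occupied = 1
    for w, a in zip(workload_dims, array_dims):
        tiles = math.ceil(w / a)   # passes needed along this axis
        useful *= w                # useful work along this axis
        occupied *= tiles * a      # PE-slots consumed (incl. idle edge slots)
    return useful / occupied

# Hypothetical workload: 48 output rows x 20 output cols x 96 channels.
workload = (48, 20, 96)

# Same 256-PE budget: a 2D 16x16 array (channels serialized in time,
# so they do not affect spatial utilization) vs. a 3D 8x8x4 array.
u2d = spatial_utilization(workload[:2], (16, 16))
u3d = spatial_utilization(workload, (8, 8, 4))

print(f"2D utilization: {u2d:.3f}")   # ragged edge tiles idle many PEs
print(f"3D utilization: {u3d:.3f}")   # better fit along each dimension
```

For this made-up shape the 3D mapping wins because each of its three array dimensions divides (or nearly divides) the matching workload dimension; which mapping wins in general depends on the workload, which is exactly why balanced tri-directional reuse matters for utilization across *diverse* workloads.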