🤖 AI Summary
3D-stacked architectures face severe memory bandwidth bottlenecks and thermal challenges—including high peak temperatures, large thermal gradients, and poor scalability—when accelerating large language model (LLM) inference. To address these issues, this work proposes a thermal-aware heterogeneous computing architecture. It is the first to jointly optimize cross-stack thermal management and heterogeneous core design: it pairs high-performance cores with high-efficiency cores, co-locates 3D-integrated DRAM with the logic die, and introduces a dynamic bandwidth-sharing scheduling mechanism so that compute-intensive and memory-intensive workloads are optimized together. The architecture significantly improves thermal distribution uniformity and hardware resource utilization. Evaluated on Llama-65B and GPT-3 66B, it achieves 2.85× and 2.21× speedups over state-of-the-art GPUs and PIM accelerators, respectively, while reducing peak temperature by up to 9.37°C.
📝 Abstract
Autoregressive decoding in LLMs is the major inference bottleneck due to its memory-intensive operations and limited hardware bandwidth. The 3D-stacked architecture, which vertically stacks multiple DRAM dies on top of a logic die, is a promising solution with significantly improved memory bandwidth. However, our experiments also show that the 3D-stacked architecture faces more severe thermal issues than 2D architectures, in terms of peak temperature, thermal gradient, and scalability. To better exploit the potential of the 3D-stacked architecture, we present Tasa, a heterogeneous architecture with cross-stack thermal optimizations that balances the temperature distribution and maximizes performance under thermal constraints. High-performance cores are designed for compute-intensive operations, while high-efficiency cores are used for memory-intensive operators, e.g., attention layers. Furthermore, we propose a bandwidth-sharing scheduling mechanism to improve bandwidth utilization in such a heterogeneous architecture. Extensive thermal experiments show that our Tasa architecture demonstrates greater scalability than the homogeneous 3D-stacked architecture, i.e., up to 5.55°C, 9.37°C, and 7.91°C peak temperature reduction for 48-, 60-, and 72-core configurations. Our experiments with Llama-65B and GPT-3 66B inference also demonstrate 2.85× and 2.21× speedups over the GPU baselines and a state-of-the-art heterogeneous PIM-based LLM accelerator.
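The abstract does not specify how operators are routed to the two core types or how bandwidth is shared; a minimal roofline-style sketch of the general idea, with hypothetical names, a hypothetical intensity threshold, and a simple traffic-proportional bandwidth split, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Op:
    """A hypothetical operator descriptor (names are illustrative)."""
    name: str
    flops: float        # total floating-point operations
    bytes_moved: float  # total DRAM traffic in bytes

def arithmetic_intensity(op: Op) -> float:
    """FLOPs per byte of DRAM traffic (roofline-style metric)."""
    return op.flops / op.bytes_moved

def dispatch(ops, threshold=10.0):
    """Route compute-bound ops (e.g. FFN GEMMs) to high-performance
    cores and memory-bound ops (e.g. decoding attention) to
    high-efficiency cores. The threshold is an assumed tuning knob,
    not a value from the paper."""
    hp, he = [], []
    for op in ops:
        (hp if arithmetic_intensity(op) >= threshold else he).append(op)
    return hp, he

def share_bandwidth(total_gbps, hp_ops, he_ops):
    """Split the stacked-DRAM bandwidth between the two core groups
    in proportion to their outstanding memory traffic, so bandwidth
    left idle by compute-bound work flows to memory-bound operators."""
    hp_bytes = sum(o.bytes_moved for o in hp_ops)
    he_bytes = sum(o.bytes_moved for o in he_ops)
    total_bytes = (hp_bytes + he_bytes) or 1.0
    return (total_gbps * hp_bytes / total_bytes,
            total_gbps * he_bytes / total_bytes)
```

For example, a decoding-phase attention operator moves roughly as many bytes as it computes FLOPs, so it lands on the high-efficiency cores, while a large GEMM with high reuse lands on the high-performance cores; the actual Tasa scheduler is dynamic and thermal-aware, which this static sketch omits.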