Prefill vs. Decode Bottlenecks: SRAM-Frequency Tradeoffs and the Memory-Bandwidth Ceiling

📅 2025-12-26
🤖 AI Summary
Large language model (LLM) inference exhibits divergent energy-efficiency bottlenecks across the compute-intensive prefill and memory-bound decode phases, with on-chip SRAM capacity and operating frequency critically impacting both energy consumption and latency. Method: We develop a co-simulation framework integrating OpenRAM (for SRAM static/dynamic power modeling), LLMCompass (for end-to-end latency simulation), and ScaleSIM (for computational intensity analysis). Contribution/Results: We find that increasing SRAM capacity exacerbates static power overhead; modestly raising core frequency (1200–1400 MHz) significantly reduces total energy; and the optimal configuration employs small SRAM (32–64 KB) paired with high frequency. Quantitative analysis reveals memory bandwidth as the fundamental ceiling for decode-phase acceleration. These findings yield actionable, architecture-level guidelines for energy–performance co-optimization of datacenter-scale LLM accelerators.

📝 Abstract
Energy consumption dictates the cost and environmental impact of deploying Large Language Models. This paper investigates the impact of on-chip SRAM size and operating frequency on the energy efficiency and performance of LLM inference, focusing on the distinct behaviors of the compute-bound prefill and memory-bound decode phases. Our simulation methodology combines OpenRAM for energy modeling, LLMCompass for latency simulation, and ScaleSIM for systolic-array operational-intensity analysis. Our findings show that total energy use is predominantly determined by SRAM size in both phases: larger buffers significantly increase static energy due to leakage, which is not offset by corresponding latency benefits. We quantitatively explore the memory-bandwidth bottleneck, demonstrating that while high operating frequencies reduce prefill latency, their benefit to memory-bound decode latency is capped by the external memory bandwidth. Counter-intuitively, a high compute frequency can reduce total energy: by shortening execution time, it decreases static energy consumption by more than the accompanying increase in dynamic energy. We identify an optimal hardware configuration for the simulated workload: high operating frequencies (1200–1400 MHz) paired with a small local buffer of 32–64 KB. This combination achieves the best energy-delay product, balancing low latency with high energy efficiency. Furthermore, we demonstrate how memory bandwidth acts as a performance ceiling: increasing compute frequency yields performance gains only up to the point where the workload becomes memory-bound. This analysis provides concrete architectural insights for designing energy-efficient LLM accelerators, especially for datacenters aiming to minimize their energy overhead.
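The abstract's counter-intuitive frequency result can be sketched with a first-order energy model: total energy is leakage power times execution time plus per-cycle switching energy. At a fixed supply voltage, dynamic energy per cycle is roughly constant, so shortening execution time shrinks the static term without growing the dynamic term. All constants below are illustrative placeholders, not values from the paper:

```python
# First-order energy model: E = P_leak * t + C_eff * V^2 * cycles.
# Raising frequency f shortens t = cycles / f, shrinking static
# (leakage) energy; at fixed voltage the dynamic term is unchanged.
# Constants are made up for illustration, not paper-simulated values.

def total_energy_j(freq_hz, work_cycles, p_leak_w, c_eff_f, v_volts):
    """Total energy in joules under a simple static + dynamic split."""
    t = work_cycles / freq_hz                        # execution time (s)
    e_static = p_leak_w * t                          # leakage energy shrinks with t
    e_dynamic = c_eff_f * v_volts**2 * work_cycles   # per-cycle switching energy
    return e_static + e_dynamic

e_slow = total_energy_j(1.0e9, 1e9, p_leak_w=2.0, c_eff_f=1e-10, v_volts=0.9)
e_fast = total_energy_j(1.4e9, 1e9, p_leak_w=2.0, c_eff_f=1e-10, v_volts=0.9)
assert e_fast < e_slow  # higher frequency lowers total energy here
```

In this simplified model the saving is entirely in the static term, which mirrors the abstract's claim that leakage, not switching, dominates the frequency tradeoff when SRAM buffers are large.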
Problem

Research questions and friction points this paper is trying to address.

Investigates SRAM size and frequency impact on LLM inference energy efficiency
Explores memory-bandwidth bottleneck limiting decode phase performance gains
Identifies optimal hardware configuration balancing energy and latency for accelerators
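The memory-bandwidth ceiling in the second bullet can be illustrated with a roofline-style latency bound: phase latency is the larger of compute time and data-movement time, so frequency increases stop helping once the workload is memory-bound. The numbers below are hypothetical, not the paper's simulated values:

```python
# Roofline-style sketch: latency = max(compute_time, memory_time).
# For a decode-like, low-arithmetic-intensity workload, raising the
# clock only helps until memory_time dominates. Illustrative numbers.

def phase_latency_s(flops, bytes_moved, freq_hz, flops_per_cycle, bw_bytes_per_s):
    compute_time = flops / (freq_hz * flops_per_cycle)  # compute-bound term
    memory_time = bytes_moved / bw_bytes_per_s          # bandwidth-bound term
    return max(compute_time, memory_time)               # slower resource wins

# Sweep frequency for a workload with ~3.2 FLOPs per byte moved.
for f_ghz in (1.0, 1.2, 1.4, 2.0):
    lat = phase_latency_s(flops=3.2e9, bytes_moved=1e9,
                          freq_hz=f_ghz * 1e9, flops_per_cycle=256,
                          bw_bytes_per_s=100e9)
    print(f"{f_ghz} GHz -> {lat * 1e3:.3f} ms")
```

With these placeholder parameters, latency improves up to roughly 1.25 GHz and then flattens at the 10 ms bandwidth floor, matching the "performance ceiling" behavior the paper quantifies.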
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes SRAM size and frequency for energy efficiency
Uses simulation tools for latency and energy modeling
Identifies memory bandwidth as a performance ceiling
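The configuration search behind these contributions reduces to minimizing the energy-delay product (EDP), i.e. energy times latency, over candidate (SRAM size, frequency) points. A minimal sketch with made-up (energy, latency) pairs, not the paper's simulated results:

```python
# Hypothetical (energy_j, latency_s) pairs per (SRAM size, frequency)
# configuration; the paper's simulated values differ. The best config
# is the one minimizing the energy-delay product E * t.
configs = {
    ("256KB", 1000e6): (5.0, 0.020),   # large SRAM: leakage-heavy
    ("64KB",  1200e6): (3.0, 0.016),
    ("32KB",  1400e6): (2.8, 0.015),   # small SRAM + high frequency
}

best = min(configs, key=lambda k: configs[k][0] * configs[k][1])
print(best)  # -> ('32KB', 1400000000.0)
```

Under these placeholder numbers the small-SRAM, high-frequency point wins on EDP, consistent with the paper's 32–64 KB / 1200–1400 MHz recommendation.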