🤖 AI Summary
This work addresses the nonlinear impact of input and output sequence lengths on the energy efficiency of large language model (LLM) inference, a phenomenon inadequately captured by conventional linear estimation methods. By analyzing the computational and memory-access complexity inherent in the Transformer architecture, the authors develop an analytical model that accurately characterizes energy efficiency across varying sequence lengths. Validated on NVIDIA H100 GPUs using the TensorRT-LLM framework with models ranging from 1B to 9B parameters, the study reveals, for the first time, an energy-efficiency “sweet spot” in LLM inference, challenging the prevailing assumption of linear energy consumption. For token lengths from 64 to 4096, the model achieves an average mean absolute percentage error (MAPE) of only 1.79%, offering both theoretical grounding and practical guidance for energy-efficient, green AI deployment.
📝 Abstract
Large Language Model (LLM) inference is central to modern AI applications, making it critical to understand its energy footprint. Existing approaches typically estimate energy consumption as a simple linear function of input and output sequence lengths, yet our observations reveal clear energy-efficiency regimes: peak efficiency occurs with short-to-moderate inputs and medium-length outputs, while efficiency drops sharply for long inputs or very short outputs, indicating a non-linear dependency. In this work, we propose an analytical model derived from the computational and memory-access complexity of the Transformer architecture that accurately characterizes the efficiency curve as a function of input and output lengths. To assess its accuracy, we measure energy consumption using TensorRT-LLM on NVIDIA H100 GPUs across a diverse set of LLMs ranging from 1B to 9B parameters, including OPT, LLaMA, Gemma, Falcon, Qwen2, and Granite, tested over input and output lengths from 64 to 4096 tokens, achieving a mean MAPE of 1.79%. Our results show that aligning sequence lengths with these efficiency "sweet spots" can substantially reduce energy usage, supporting informed truncation, summarization, and adaptive generation strategies in production systems.
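To see why a non-linear efficiency curve with a sweet spot can arise from Transformer complexity, consider a minimal toy cost model. This is an illustrative sketch only, not the paper's fitted analytical model: the functional form and every coefficient (`a`, `b`, `c`, `k`) are placeholder assumptions chosen for demonstration, combining a prefill cost (linear-layer term plus quadratic attention term in input length) with a per-token decode cost whose KV-cache traffic grows with context length.

```python
# Toy sketch of a non-linear inference-energy model (arbitrary units).
# All coefficients are illustrative placeholders, NOT measured values
# and NOT the paper's fitted model.

def inference_energy(n_in: int, n_out: int, d: int = 4096,
                     a: float = 1e-9, b: float = 1e-12,
                     c: float = 1e-9, k: float = 1e-9) -> float:
    """Per-request energy under generic Transformer complexity terms.

    prefill: linear-layer work O(n_in * d^2) + attention work O(n_in^2 * d)
    decode:  per token, weight-read term O(d^2) plus KV-cache reads that
             grow with context length (n_in + t) for the t-th token.
    """
    prefill = a * n_in * d ** 2 + b * n_in ** 2 * d
    kv_reads = sum(n_in + t for t in range(n_out))  # growing KV cache
    decode = c * n_out * d ** 2 + k * d * kv_reads
    return prefill + decode

def energy_per_output_token(n_in: int, n_out: int) -> float:
    """Inverse of energy efficiency: lower is better."""
    return inference_energy(n_in, n_out) / n_out

# Very short outputs amortize the fixed prefill cost poorly, while very
# long outputs pay ever-growing KV-cache traffic, so energy per token is
# minimized at a medium output length rather than falling linearly.
curve = {n: energy_per_output_token(512, n) for n in (64, 512, 2048, 4096)}
```

Under these placeholder coefficients the toy curve bottoms out at a medium output length, mirroring the sweet-spot behavior reported above; a real model would calibrate such terms against measured GPU energy.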