Determining Energy Efficiency Sweet Spots in Production LLM Inference

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the nonlinear impact of input and output sequence lengths on the energy efficiency of large language model (LLM) inference, a phenomenon inadequately captured by conventional linear estimation methods. By analyzing the computational and memory-access complexity inherent in the Transformer architecture, the authors develop an analytical model that accurately characterizes energy efficiency across varying sequence lengths. Validated on NVIDIA H100 GPUs using the TensorRT-LLM framework with models ranging from 1B to 9B parameters, the study reveals, for the first time, the existence of an energy-efficiency "sweet spot" during LLM inference, thereby challenging the prevailing assumption of linear energy consumption. Within token lengths from 64 to 4096, the model achieves an average mean absolute percentage error (MAPE) of only 1.79%, offering both theoretical grounding and practical guidance for energy-efficient, green AI deployment.

📝 Abstract
Large Language Model (LLM) inference is central in modern AI applications, making it critical to understand its energy footprint. Existing approaches typically estimate energy consumption through simple linear functions of input and output sequence lengths, yet our observations reveal clear energy-efficiency regimes: peak efficiency occurs with short-to-moderate inputs and medium-length outputs, while efficiency drops sharply for long inputs or very short outputs, indicating a non-linear dependency. In this work, we propose an analytical model derived from the computational and memory-access complexity of the Transformer architecture, capable of accurately characterizing the efficiency curve as a function of input and output lengths. To assess its accuracy, we evaluate energy consumption using TensorRT-LLM on NVIDIA H100 GPUs across a diverse set of LLMs ranging from 1B to 9B parameters, including OPT, LLaMA, Gemma, Falcon, Qwen2, and Granite, tested over input and output lengths from 64 to 4096 tokens, achieving a mean MAPE of 1.79%. Our results show that aligning sequence lengths with these efficiency "sweet spots" can substantially reduce energy usage, supporting informed truncation, summarization, and adaptive generation strategies in production systems.
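To illustrate why per-token energy is non-linear in sequence lengths, here is a minimal toy sketch (not the paper's actual model; all constants and formulas are illustrative assumptions) that combines a compute term for the prefill phase, which grows quadratically with input length due to attention, with a memory-access term for the decode phase, where each generated token re-reads the weights and a growing KV cache:

```python
# Toy energy model for Transformer inference (illustrative only; the
# paper's analytical model is more detailed and validated on H100 GPUs).
# e_flop and e_byte are hypothetical per-FLOP and per-byte energy costs.

def inference_energy_per_token(n_in, n_out, d=4096, n_layers=32,
                               e_flop=1e-11, e_byte=2e-10):
    """Return a toy estimate of joules per generated token."""
    params = n_layers * 12 * d * d  # rough parameter count for a dense model
    # Prefill: linear term in parameters, plus quadratic attention term.
    prefill_flops = 2 * params * n_in + n_layers * 2 * n_in * n_in * d
    # Decode: each step re-reads the weights and the (average) KV cache.
    decode_bytes = n_out * (2 * params
                            + n_layers * 2 * d * (n_in + n_out / 2)) * 2
    decode_flops = n_out * 2 * params
    energy = e_flop * (prefill_flops + decode_flops) + e_byte * decode_bytes
    return energy / n_out

# Very short outputs amortize the prefill cost over few tokens; very long
# inputs pay the quadratic attention cost. Moderate lengths sit between
# the two regimes, producing the "sweet spot" the abstract describes.
e_short_out = inference_energy_per_token(64, 16)
e_moderate = inference_energy_per_token(64, 512)
e_long_in = inference_energy_per_token(4096, 512)
```

Even this crude sketch reproduces the qualitative shape reported in the abstract: `e_moderate` is lower than both `e_short_out` and `e_long_in`, so per-token energy is minimized away from the extremes.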
Problem

Research questions and friction points this paper is trying to address.

Energy Efficiency
Large Language Models
LLM Inference
Non-linear Dependency
Sequence Length
Innovation

Methods, ideas, or system contributions that make the work stand out.

energy efficiency
LLM inference
sweet spot
Transformer complexity
sequence length optimization
Hiari Pizzini Cavagna
University of Bologna
Andrea Proia
University of Bologna
Giacomo Madella
University of Bologna
Giovanni B. Esposito
University of Bologna
Francesco Antici
University of Bologna
Daniele Cesarini
Project Manager & HPC Technology Specialist, CINECA
High Performance Computing, Heterogeneous Computing, Parallel Programming Models, Runtime Systems, Power and Thermal Management
Z. Kiziltan
University of Bologna
Andrea Bartolini
Associate Professor, University of Bologna
Energy Management, Thermal Management, Near-Threshold Computing, High Performance Computing