🤖 AI Summary
The high energy consumption of large language models (LLMs) severely hinders their sustainable deployment, yet existing energy-efficiency evaluations rely heavily on idealized benchmarks that poorly reflect real-world production workloads. To address this gap, we propose the first energy-efficiency benchmarking framework tailored to realistic LLM inference loads. Built upon vLLM, it establishes a multi-concurrency, dynamically scheduled testbed that emulates production-grade request patterns. We systematically measure and analyze energy consumption across diverse model scales, architectures, and inference workloads. Through cross-model and cross-configuration empirical studies, we quantitatively uncover previously unreported nonlinear relationships between energy efficiency and key factors, including parameter count, attention mechanism design, and hardware utilization. This work not only demonstrates the feasibility of production-relevant energy-efficiency assessment but also delivers a reproducible, extensible quantitative toolkit and actionable optimization guidelines, laying both methodological foundations and practical evidence for green AI systems.
📝 Abstract
The prevalence of Large Language Models (LLMs) is having a growing impact on the climate due to the substantial energy required for their deployment and use. To raise awareness among developers who are implementing LLMs in their products, there is a strong need to collect more information about the energy efficiency of LLMs. While existing research has evaluated the energy efficiency of various models, these benchmarks often fall short of representing realistic production scenarios. In this paper, we introduce the LLM Efficiency Benchmark, designed to simulate real-world usage conditions. Our benchmark utilizes vLLM, a high-throughput, production-ready LLM serving backend that optimizes model performance and efficiency. We examine how factors such as model size, architecture, and concurrent request volume affect inference energy efficiency. Our findings demonstrate that it is possible to create energy-efficiency benchmarks that better reflect practical deployment conditions, providing valuable insights for developers aiming to build more sustainable AI systems.
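At its core, an inference energy-efficiency measurement like the one described above relates the tokens generated during a serving run to the energy drawn over the same window. A minimal, hypothetical sketch of that computation follows; the function names and the fixed-interval power-sampling scheme (e.g., polling GPU power via NVML while vLLM serves concurrent requests) are illustrative assumptions, not the benchmark's actual code:

```python
# Hypothetical sketch: deriving a tokens-per-joule figure from periodic
# GPU power samples taken while a batch of concurrent requests is served.
# The sampling scheme and names are assumptions for illustration only.

def energy_joules(power_samples_watts, interval_s):
    """Approximate energy by integrating power samples (trapezoidal rule)."""
    if len(power_samples_watts) < 2:
        return 0.0
    total = 0.0
    for a, b in zip(power_samples_watts, power_samples_watts[1:]):
        total += (a + b) / 2.0 * interval_s
    return total

def tokens_per_joule(total_tokens, power_samples_watts, interval_s):
    """Energy-efficiency metric: generated tokens per joule consumed."""
    e = energy_joules(power_samples_watts, interval_s)
    return total_tokens / e if e > 0 else float("inf")

# Example: a steady 250 W draw sampled every 0.1 s over 2 s (21 samples
# span 20 intervals), with 400 tokens generated across all requests.
samples = [250.0] * 21
print(tokens_per_joule(400, samples, 0.1))  # 400 tokens / 500 J = 0.8
```

In practice the power samples would come from a hardware counter (such as NVML's power readings) polled in a background thread during the serving run, so that the integral captures prefill, decode, and scheduler idle time alike.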