🤖 AI Summary
To jointly minimize energy consumption and inference latency when deploying large language models (LLMs) on resource-constrained edge devices, this paper proposes an energy-aware inference management framework. Implemented on the NVIDIA Jetson AGX Orin platform, the framework jointly optimizes GPU clock frequency and batch size, using a novel exploration-exploitation mechanism for efficient configuration search with the energy-delay product (EDP) as the primary optimization objective. Experiments show that, compared to default configurations, the method reduces EDP by 12.4%–29.9% across diverse LLMs and workloads, significantly improving energy efficiency. The key contribution is the first integration of dynamic hardware configuration search with EDP-driven, multi-dimensional co-optimization, enabling real-time, energy-efficient, and adaptive LLM inference scheduling under stringent edge constraints.
📝 Abstract
Most Large Language Models (LLMs) are currently deployed in the cloud, with users relying on internet connectivity for access. However, this paradigm faces challenges such as network latency, privacy concerns, and bandwidth limitations, so deploying LLMs on edge devices has become an important research focus. In edge inference, request latency is critical, as high latency can impair real-time tasks. At the same time, edge devices usually have limited battery capacity, making energy consumption another major concern; balancing energy consumption and inference latency is therefore essential. To address this, we propose an LLM inference energy management framework that optimizes GPU frequency and batch size to balance latency and energy consumption. By effectively managing the exploration-exploitation dilemma in configuration search, the framework finds the optimal settings. We implemented the framework on the NVIDIA Jetson AGX Orin platform and conducted a series of experimental validations. Results demonstrate that, compared to the default configuration, our framework reduces the energy-delay product (EDP) by 12.4%–29.9%, achieving a better balance between energy consumption and latency.
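The abstract describes an exploration-exploitation search over GPU frequency and batch size with EDP (energy × latency) as the objective. A minimal epsilon-greedy sketch of that idea is below; this is not the paper's actual algorithm, and the frequency/batch values and the analytic energy-latency model stand in for real measurements on the device:

```python
import random

# Hypothetical configuration space (illustrative values only, not the
# paper's actual Jetson AGX Orin settings).
FREQUENCIES = [306, 612, 918, 1300]  # GPU clock frequencies, MHz
BATCH_SIZES = [1, 2, 4, 8]

def measure_edp(freq, batch):
    """Stand-in for one measured inference round.

    A real deployment would run a batch at the given GPU frequency,
    record energy (J) and latency (s), and return their product.
    Here a toy analytic model is used purely for illustration.
    """
    latency = 1000.0 * batch / freq   # lower frequency -> higher latency
    power = 5.0 + 0.01 * freq         # higher frequency -> higher power
    energy = power * latency
    return energy * latency           # energy-delay product

def epsilon_greedy_search(rounds=200, epsilon=0.2, seed=0):
    """Epsilon-greedy search for the (frequency, batch) pair with
    the lowest average observed EDP."""
    rng = random.Random(seed)
    configs = [(f, b) for f in FREQUENCIES for b in BATCH_SIZES]
    totals, counts = {}, {}

    # Initialization: measure every configuration once so each has a score.
    for cfg in configs:
        totals[cfg] = measure_edp(*cfg)
        counts[cfg] = 1

    for _ in range(rounds):
        if rng.random() < epsilon:
            cfg = rng.choice(configs)  # explore a random configuration
        else:                          # exploit the best average so far
            cfg = min(configs, key=lambda c: totals[c] / counts[c])
        totals[cfg] += measure_edp(*cfg)
        counts[cfg] += 1

    # Averaging repeated measurements makes the choice robust to the
    # noise a real power/latency sensor would introduce.
    return min(configs, key=lambda c: totals[c] / counts[c])

print(epsilon_greedy_search())
```

In practice the exploitation step would run most requests at the current best configuration while the exploration step occasionally re-probes alternatives, allowing the scheduler to adapt as the workload shifts.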