Camel: Energy-Aware LLM Inference on Resource-Constrained Devices

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of jointly minimizing energy consumption and inference latency when deploying large language models (LLMs) on resource-constrained edge devices, this paper proposes an energy-aware inference management framework. Implemented on the NVIDIA Jetson AGX Orin platform, the framework jointly optimizes GPU clock frequency and batch size, incorporating a novel exploration-exploitation mechanism for efficient configuration search, with the energy-delay product (EDP) as the primary optimization objective. Experimental results demonstrate that, compared to default configurations, the method reduces EDP by 12.4%–29.9% across diverse LLMs and workloads, significantly improving energy efficiency. The key contribution lies in the first integration of dynamic hardware configuration search with EDP-driven, multi-dimensional co-optimization—enabling real-time, energy-efficient, and adaptive LLM inference scheduling under stringent edge constraints.

📝 Abstract
Most Large Language Models (LLMs) are currently deployed in the cloud, with users relying on internet connectivity for access. However, this paradigm faces challenges such as network latency, privacy concerns, and bandwidth limits. Deploying LLMs on edge devices has therefore become an important research focus. In edge inference, request latency is critical, as high latency can impair real-time tasks. At the same time, edge devices usually have limited battery capacity, making energy consumption another major concern; balancing energy consumption and inference latency is essential. To address this, we propose an LLM inference energy management framework that optimizes GPU frequency and batch size to balance latency and energy consumption. By effectively managing the exploration-exploitation dilemma in configuration search, the framework finds the optimal settings. The framework was implemented on the NVIDIA Jetson AGX Orin platform, and a series of experimental validations were conducted. Results demonstrate that, compared to the default configuration, our framework reduces the energy-delay product (EDP) by 12.4%–29.9%, achieving a better balance between energy consumption and latency.
Problem

Research questions and friction points this paper is trying to address.

Balancing energy consumption and latency in edge LLM inference
Optimizing GPU frequency and batch size for efficient inference
Reducing energy delay product on resource-constrained edge devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes GPU frequency for energy efficiency
Adjusts batch size to balance latency
Manages exploration-exploitation in configuration search
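The search loop described above can be sketched as a simple epsilon-greedy bandit over (GPU frequency, batch size) configurations, with the energy-delay product (EDP = energy × latency) as the cost to minimize. This is only an illustrative sketch, not the paper's actual algorithm: the configuration values, the `measure_edp` stub, and the fixed epsilon are all hypothetical stand-ins for real on-device measurement.

```python
import random

# Hypothetical configuration space: GPU clock frequencies (MHz) and batch
# sizes. Values are illustrative, not the paper's actual search space.
FREQUENCIES = [306, 612, 918, 1224, 1300]
BATCH_SIZES = [1, 2, 4, 8]
CONFIGS = [(f, b) for f in FREQUENCIES for b in BATCH_SIZES]

def measure_edp(freq_mhz, batch_size):
    """Stand-in for a real measurement: run a batch at the given GPU clock
    and return energy (J) * latency (s). Here a toy analytic model is used:
    higher clocks cut latency but raise power superlinearly; larger batches
    amortize fixed cost but add per-batch delay."""
    latency = (1.0 / freq_mhz) * (100 + 10 * batch_size)
    power = 5.0 + 1e-5 * freq_mhz ** 1.8
    energy = power * latency
    return energy * latency  # energy-delay product

def epsilon_greedy_search(rounds=200, epsilon=0.2, seed=0):
    """Epsilon-greedy search over configurations, keeping a running
    mean EDP per configuration and returning the best one found."""
    rng = random.Random(seed)
    stats = {c: (0, 0.0) for c in CONFIGS}  # config -> (count, mean EDP)
    for _ in range(rounds):
        tried = [c for c, (n, _) in stats.items() if n > 0]
        if tried and rng.random() > epsilon:
            # Exploit: pick the configuration with the lowest mean EDP so far.
            config = min(tried, key=lambda c: stats[c][1])
        else:
            # Explore: sample a random configuration.
            config = rng.choice(CONFIGS)
        edp = measure_edp(*config)
        n, mean = stats[config]
        stats[config] = (n + 1, mean + (edp - mean) / (n + 1))
    explored = [c for c, (n, _) in stats.items() if n > 0]
    return min(explored, key=lambda c: stats[c][1])

best = epsilon_greedy_search()
print("best (frequency MHz, batch size):", best)
```

In a real deployment the measurement stub would be replaced by setting the GPU clock (e.g. via the Jetson power-management tools), running an inference batch, and reading the onboard power sensors to compute EDP.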
Hao Xu
College of Computer Science and Technology, National University of Defense Technology
Long Peng
China Electric Power Research Institute
Shezheng Song
NUDT
Xiaodong Liu
College of Computer Science and Technology, National University of Defense Technology
Ma Jun
College of Computer Science and Technology, National University of Defense Technology
Shasha Li
College of Computer Science and Technology, National University of Defense Technology
Jie Yu
College of Computer Science and Technology, National University of Defense Technology
Xiaoguang Mao
College of Computer Science and Technology, National University of Defense Technology