Offline Reinforcement-Learning-Based Power Control for Application-Agnostic Energy Efficiency

📅 2026-01-16
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work proposes the first application-agnostic CPU power-management approach based on offline reinforcement learning, circumventing the key challenges of online methods: difficult environment modeling, system disturbances, and safety risks. By leveraging historical policy data, the controller is trained without prior knowledge of target applications, using readily available system signals such as Intel RAPL power measurements, hardware performance counters, and runtime heartbeat indicators. The method generalizes its energy-efficiency gains across diverse workloads while maintaining computational reliability. Experimental evaluation on a range of compute- and memory-intensive benchmarks demonstrates substantial energy savings at a modest, acceptable performance cost, effectively balancing scientific-computing fidelity and energy conservation.
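The summary's core idea, training a controller from a fixed log of past transitions rather than through live interaction, can be illustrated with a toy tabular Q-learning sweep. Everything below is invented for illustration: the states (which could stand for discretized performance-counter readings), the discrete power-cap actions, and the rewards are hypothetical, and the paper's actual algorithm, state features, and reward function are not specified in this summary.

```python
import numpy as np

# Hypothetical logged transitions: (state, action, reward, next_state).
# In the paper's setting, states might encode counter/heartbeat readings,
# actions might be RAPL power caps, and reward might trade energy against
# performance; the numbers here are purely illustrative.
dataset = [
    (0, 1, -0.2, 1),
    (1, 0,  0.5, 2),
    (2, 1,  0.1, 0),
    (1, 1, -0.1, 2),
]

n_states, n_actions = 3, 2
gamma, alpha = 0.9, 0.1  # discount factor and learning rate

Q = np.zeros((n_states, n_actions))
for _ in range(500):                      # sweep the fixed dataset repeatedly
    for s, a, r, s2 in dataset:
        target = r + gamma * Q[s2].max()  # bootstrapped return estimate
        Q[s, a] += alpha * (target - Q[s, a])

policy = Q.argmax(axis=1)  # greedy power-cap choice per state
```

No environment is ever stepped during training, which is the defining property of the offline setting; practical offline RL methods add safeguards (e.g., conservatism) against actions the dataset never covered.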

📝 Abstract
Energy efficiency has become an integral aspect of modern computing infrastructure design, impacting the performance, cost, scalability, and durability of production systems. The incorporation of power actuation and sensing capabilities in CPU designs is indicative of this, enabling the deployment of system software that can actively monitor and adjust energy consumption and performance at runtime. While reinforcement learning (RL) would seem ideal for the design of such energy efficiency control systems, online training presents challenges ranging from the lack of proper models for setting up an adequate simulated environment, to perturbation (noise) and reliability issues if training is deployed on a live system. In this paper, we discuss the use of offline reinforcement learning as an alternative approach for the design of an autonomous CPU power controller, with the goal of improving the energy efficiency of parallel applications at runtime without unduly impacting their performance. Offline RL sidesteps the issues incurred by online RL training by leveraging a dataset of state transitions collected from arbitrary policies prior to training. Our methodology applies offline RL to a gray-box approach to energy efficiency, combining online application-agnostic performance data (e.g., heartbeats) and hardware performance counters to ensure that the scientific objectives are met with limited performance degradation. Evaluating our method on a variety of compute-bound and memory-bound benchmarks and controlling power on a live system through Intel's Running Average Power Limit, we demonstrate that such an offline-trained agent can substantially reduce energy consumption at a tolerable performance degradation cost.
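The actuation mechanism the abstract names, Intel's Running Average Power Limit (RAPL), is exposed on Linux through the powercap sysfs tree. The sketch below shows the standard read/write pattern under that interface; the `intel-rapl:0` package-domain path is the common layout but can differ per machine, writing the limit requires root, and none of this code is taken from the paper itself.

```python
from pathlib import Path

# Typical sysfs path for the package-0 RAPL domain on Linux
# (may differ across machines and kernel versions).
RAPL = Path("/sys/class/powercap/intel-rapl:0")

def watts_to_microwatts(watts: float) -> int:
    """RAPL sysfs files express power limits in microwatts."""
    return int(watts * 1_000_000)

def read_energy_uj() -> int:
    """Cumulative package energy in microjoules (the counter wraps)."""
    return int((RAPL / "energy_uj").read_text())

def set_power_limit(watts: float) -> None:
    """Apply the long-term package power cap (requires root)."""
    (RAPL / "constraint_0_power_limit_uw").write_text(
        str(watts_to_microwatts(watts))
    )
```

A controller like the one described would periodically sample `energy_uj` (differencing successive readings to estimate power) alongside performance counters and heartbeats, then call something like `set_power_limit` as its action.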
Problem

Research questions and friction points this paper is trying to address.

energy efficiency
power control
offline reinforcement learning
application-agnostic
CPU power management
Innovation

Methods, ideas, or system contributions that make the work stand out.

offline reinforcement learning
power control
energy efficiency
application-agnostic
hardware performance counters
🔎 Similar Papers
No similar papers found.
Akhilesh Raj
Vanderbilt University
Swann Perarnau
Argonne National Laboratory
Operating Systems, HPC, Scheduling, Performance Evaluation, Memory
A. Gokhale
Vanderbilt University
Solomon Bekele Abera
Argonne National Laboratory