🤖 AI Summary
Problem: Existing work lacks a systematic model of the trade-offs between energy efficiency and performance for DNN inference and training on edge accelerators (e.g., the Nvidia Jetson Orin AGX) across their diverse power modes.
Method: This paper introduces a time-roofline and a novel energy-roofline modeling framework that integrates FLOP-based computation analysis with byte-level memory-access characterization, enabling first-principles modeling of hardware behavior across multiple power modes.
Contribution/Results: We demonstrate that the default MAXN power mode is not the most energy-efficient operating point, and that time efficiency implies energy efficiency across all power modes. The model uncovers several counterintuitive power-performance relationships and extends to training workloads. Experimental evaluation shows that power-mode tuning guided by our model reduces energy consumption by up to 15% with minimal degradation in inference latency, improving deployment efficiency for energy-constrained edge AI applications.
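A minimal sketch of the coupled time/energy roofline idea, assuming a simple max-of-bounds time roofline and energy as average power times roofline runtime; the peak throughput, bandwidth, and power figures below are invented placeholders, not measured Orin AGX values:

```python
# Minimal time/energy roofline sketch for one DNN kernel under two power
# modes. All hardware numbers are illustrative placeholders, NOT measured
# Jetson Orin AGX characteristics.

def time_roofline(flops, bytes_moved, peak_flops, peak_bw):
    """Attainable runtime (s): the slower of compute-bound and memory-bound time."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

def energy_roofline(runtime_s, avg_power_w):
    """Energy (J) under a power mode: average power times roofline runtime."""
    return avg_power_w * runtime_s

# Hypothetical power modes: (peak FLOP/s, peak memory bandwidth B/s, avg power W).
POWER_MODES = {
    "MAXN":    (5.0e12, 200e9, 60.0),
    "mode_lo": (2.0e12, 100e9, 25.0),
}

# A memory-bound layer characterized analytically: FLOPs and bytes accessed.
flops, bytes_moved = 2.0e8, 50e6

for name, (pf, bw, pw) in POWER_MODES.items():
    t = time_roofline(flops, bytes_moved, pf, bw)
    print(f"{name}: time={t * 1e3:.2f} ms, energy={energy_roofline(t, pw) * 1e3:.1f} mJ")
```

In this toy setting the faster MAXN mode (0.25 ms, 15.0 mJ) is not the most energy efficient for a memory-bound kernel (mode_lo: 0.50 ms, 12.5 mJ), mirroring the kind of insight the roofline analysis surfaces.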
📝 Abstract
Edge accelerators such as Nvidia Jetsons are becoming an integral part of the computing continuum, and are often used for DNN inference and training. Nvidia Jetson edge devices have $2000$+ CUDA cores within a $70$W power envelope and offer $1000$s of power modes to customize CPU, GPU and memory frequencies. Their widely varying power--performance trade-offs can be exploited for energy- and power-constrained deployments. While data-driven methods to predict the power and latency of DNN workloads on edge devices exist, there is a lack of principled studies to understand why edge accelerators and their power modes perform the way they do. We develop a time roofline and a novel energy roofline model for the Jetson Orin AGX across diverse power modes, and couple them with an analytical model of the compute (FLOP) and memory access (bytes) of DNN inference workloads to analyze them from first principles. These reveal unique, sometimes counter-intuitive, insights into the power and performance behavior of DNN workloads on edge accelerators, e.g., that the default power mode MAXN is not the most energy efficient, and that time efficiency implies energy efficiency for all power modes. We also extend our analytical roofline models to DNN training. Finally, we apply these methods to tune the power mode (and hence the roofline) of the edge device to optimize the latency and energy of DNN inference, achieving up to $15\%$ lower energy with minimal degradation in inference time.
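The final step, choosing a power mode (and hence a roofline) to minimize energy under a latency budget, could be sketched as below; `best_power_mode`, the mode table, and all numbers are hypothetical stand-ins, not the paper's actual tuning procedure or measured values:

```python
# Hypothetical power-mode selection guided by roofline estimates. The mode
# names and hardware numbers are placeholders, not measured values.

def best_power_mode(modes, flops, bytes_moved, latency_budget_s):
    """Return the lowest-energy mode whose roofline time fits the latency budget."""
    best = None
    for name, (peak_flops, peak_bw, avg_power_w) in modes.items():
        t = max(flops / peak_flops, bytes_moved / peak_bw)  # time roofline
        if t > latency_budget_s:
            continue  # mode too slow for this workload
        energy = avg_power_w * t  # energy roofline under this mode
        if best is None or energy < best[0]:
            best = (energy, name)
    return best[1] if best else None

modes = {
    "MAXN":    (5.0e12, 200e9, 60.0),
    "mode_lo": (2.0e12, 100e9, 25.0),
}
# A memory-bound kernel with latency headroom: the slower mode saves energy.
print(best_power_mode(modes, flops=2.0e8, bytes_moved=50e6,
                      latency_budget_s=1e-3))  # -> "mode_lo"
```

The design choice here is deliberately simple: enumerate candidate modes, filter by the time roofline, then rank by predicted energy, which captures the paper's observation that the fastest mode need not be the cheapest in joules when slack in the latency budget exists.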