Pagoda: An Energy and Time Roofline Study for DNN Workloads on Edge Accelerators

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing work lacks systematic modeling of the energy-efficiency–performance trade-offs of DNN inference and training on edge accelerators (e.g., the Nvidia Jetson Orin AGX) across their diverse power modes. Method: The paper introduces a joint time-roofline and energy-roofline modeling framework that integrates FLOP-based computation analysis with byte-level memory-access characterization, enabling first-principles modeling of hardware behavior under multiple power modes. Contribution/Results: The authors show that the default MAXN power mode is not the most energy-efficient operating point, and that time efficiency implies energy efficiency across all power modes. The model uncovers several counterintuitive power–performance relationships and extends to training workloads. Power-mode optimization guided by the model reduces energy consumption by up to 15% with minimal degradation in inference latency, improving deployment efficiency for energy-constrained edge AI applications.

📝 Abstract
Edge accelerators such as Nvidia Jetsons are becoming an integral part of the computing continuum, and are often used for DNN inferencing and training. Nvidia Jetson edge devices have 2000+ CUDA cores within a 70 W power envelope and offer 1000s of power modes to customize CPU, GPU and memory frequencies. Their widely varying power–performance trade-offs can be exploited for energy and power-constrained deployments. While data-driven methods to predict the power and latency of DNN workloads for edge devices exist, there is a lack of principled study to understand why edge accelerators and their power modes perform the way they do. We develop a time roofline and a novel energy roofline model for the Jetson Orin AGX for diverse power modes, and couple it with an analytical model of the compute (FLOP) and memory access (bytes) for DNN inference workloads to analyze them from first principles. These reveal unique, sometimes counter-intuitive, insights into the power and performance behavior of DNN workloads on edge accelerators, e.g., the default power mode MAXN is not the most energy efficient and time efficiency implies energy efficiency for all power modes. We also extend our analytical roofline models to DNN training. Finally, we apply these methods to tune the power mode (and hence the roofline) of the edge device to optimize the latency and energy for DNN inference, with up to 15% lower energy and minimal degradation in inference time.
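The coupled model in the abstract can be illustrated with a minimal sketch: a time roofline bounds runtime by the slower of compute and memory traffic, and an energy roofline adds per-FLOP, per-byte, and idle-power terms. This is a generic sketch, not the paper's calibrated model; all constants below are hypothetical placeholders, not measured Jetson Orin AGX values.

```python
# Minimal time/energy roofline sketch for one DNN layer.
# All constants are hypothetical, for illustration only.

PEAK_FLOPS = 5.3e12   # peak throughput, FLOP/s (hypothetical)
PEAK_BW    = 204.8e9  # peak memory bandwidth, bytes/s (hypothetical)
E_FLOP     = 5e-12    # dynamic energy per FLOP, joules (hypothetical)
E_BYTE     = 50e-12   # dynamic energy per byte moved, joules (hypothetical)
P_IDLE     = 10.0     # constant (idle) power, watts (hypothetical)

def time_roofline(flops, bytes_moved):
    """Attainable runtime: bound by compute or by memory traffic."""
    return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

def energy_roofline(flops, bytes_moved):
    """Energy = dynamic compute + dynamic memory + idle power over runtime."""
    t = time_roofline(flops, bytes_moved)
    return flops * E_FLOP + bytes_moved * E_BYTE + P_IDLE * t

# Example: a layer with 2 GFLOP of compute and 40 MB of memory traffic
t = time_roofline(2e9, 40e6)
e = energy_roofline(2e9, 40e6)
print(f"time ~ {t*1e3:.2f} ms, energy ~ {e*1e3:.2f} mJ")
```

The idle-power term is what makes energy and time rooflines diverge: a slower power mode can save dynamic energy yet pay more idle energy over a longer runtime, which is one source of the counterintuitive behavior the paper analyzes.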
Problem

Research questions and friction points this paper is trying to address.

Analyzing power-performance trade-offs in edge accelerators for DNN workloads
Developing energy and time roofline models to understand accelerator behavior
Optimizing power modes to reduce energy consumption with minimal latency impact
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed time and energy roofline models for Jetson Orin AGX
Coupled roofline models with analytical DNN workload analysis
Applied models to optimize power modes for energy efficiency
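The power-mode tuning idea in the last bullet can be sketched as a constrained search: predict time and energy per mode from its roofline parameters, then pick the lowest-energy mode within a latency slack of the fastest. The mode table, constants, and energy model below are hypothetical stand-ins, not the paper's measured power modes.

```python
# Hedged sketch of power-mode selection under a latency constraint.
# Mode names and roofline constants are hypothetical examples.

POWER_MODES = {
    # mode: (peak FLOP/s, peak bytes/s, idle power in watts)
    "MAXN": (5.3e12, 204.8e9, 15.0),
    "30W":  (3.0e12, 150.0e9, 8.0),
    "15W":  (1.5e12, 100.0e9, 5.0),
}

def predict(flops, bytes_moved, mode):
    """Roofline time and a crude energy estimate for one power mode."""
    pf, bw, p_idle = POWER_MODES[mode]
    t = max(flops / pf, bytes_moved / bw)
    e = p_idle * t + flops * 5e-12 + bytes_moved * 50e-12
    return t, e

def pick_mode(flops, bytes_moved, latency_slack=0.10):
    """Lowest-energy mode whose predicted time is within slack of the fastest."""
    preds = {m: predict(flops, bytes_moved, m) for m in POWER_MODES}
    t_best = min(t for t, _ in preds.values())
    feasible = {m: e for m, (t, e) in preds.items()
                if t <= t_best * (1 + latency_slack)}
    return min(feasible, key=feasible.get)
```

With zero slack this always returns the fastest mode; loosening the slack lets a lower-power mode win on energy, mirroring the paper's finding that MAXN need not be the energy optimum.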
Prashanthi S. K.
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Kunal Kumar Sahoo
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Amartya Ranjan Saikia
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Pranav Gupta
Assistant Professor, Gies College of Business, UIUC
Collective Intelligence · Human-AI Teaming · Transactive Attention · Digital Nudging
Atharva Vinay Joshi
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Priyanshu Pansari
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Yogesh Simmhan
Associate Professor, Indian Institute of Science
Distributed Systems · Edge Accelerators · Graph Analytics · Cloud Computing · Federated Learning