Exploring energy consumption of AI frameworks on a 64-core RV64 Server CPU

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
A lack of energy-efficiency evaluation methodologies for AI inference on RISC-V architectures hinders sustainable deployment of AI workloads. Method: We conduct the first fine-grained, cross-framework energy benchmarking study—covering PyTorch, ONNX Runtime, and TensorFlow—on a production-grade 64-core SOPHON SG2042 RISC-V server, with hardware-level power monitoring and comparative analysis of XNNPACK versus OpenBLAS backends. Contribution/Results: Backend selection proves decisive for energy efficiency: enabling XNNPACK in ONNX Runtime and TensorFlow reduces average inference energy consumption by 27.3% relative to PyTorch with OpenBLAS. This work identifies critical energy bottlenecks in RISC-V AI frameworks and establishes XNNPACK as the preferred high-efficiency backend. It provides empirical evidence and actionable optimization guidance for low-carbon AI deployment across open-source ecosystems on RISC-V servers.

📝 Abstract
In today's era of rapid technological advancement, artificial intelligence (AI) applications require large-scale, high-performance, and data-intensive computations, leading to significant energy demands. Addressing this challenge necessitates a combined approach involving both hardware and software innovations. Hardware manufacturers are developing new, efficient, and specialized solutions, with the RISC-V architecture emerging as a prominent player due to its open, extensible, and energy-efficient instruction set architecture (ISA). Simultaneously, software developers are creating new algorithms and frameworks, yet their energy efficiency often remains unclear. In this study, we conduct a comprehensive benchmark analysis of machine learning (ML) applications on the 64-core SOPHON SG2042 RISC-V architecture. We specifically analyze the energy consumption of deep learning inference models across three leading AI frameworks: PyTorch, ONNX Runtime, and TensorFlow. Our findings show that frameworks using the XNNPACK back-end, such as ONNX Runtime and TensorFlow, consume less energy compared to PyTorch, which is compiled with the native OpenBLAS back-end.
Problem

Research questions and friction points this paper is trying to address.

Analyzing energy consumption of AI frameworks on RISC-V CPU
Comparing energy efficiency of PyTorch, ONNX Runtime, TensorFlow
Evaluating impact of back-end choices on AI framework energy use
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking ML inference on a 64-core RISC-V server CPU
Comparing energy use of PyTorch, ONNX Runtime, and TensorFlow
Showing that the XNNPACK back-end reduces energy consumption
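The back-end finding above comes down to which execution provider a framework is built with. As an illustrative sketch (not code from the paper), this is roughly how one would prefer the XNNPACK execution provider in ONNX Runtime while falling back to the default CPU provider when the local build does not include it; `model.onnx` is a placeholder path, and provider availability depends on how onnxruntime was compiled for the target (e.g. a RISC-V server):

```python
# Preference order: XNNPACK first, plain CPU as fallback.
PREFERRED = ["XnnpackExecutionProvider", "CPUExecutionProvider"]

def choose_providers(available):
    """Filter the preference list down to providers the installed
    onnxruntime build actually exposes, keeping preference order;
    always fall back to the plain CPU provider."""
    chosen = [p for p in PREFERRED if p in available]
    return chosen or ["CPUExecutionProvider"]

try:
    import onnxruntime as ort
    providers = choose_providers(ort.get_available_providers())
    # session = ort.InferenceSession("model.onnx", providers=providers)
except ImportError:
    pass  # onnxruntime not installed; the helper above is still usable
```

Whether `XnnpackExecutionProvider` appears in `get_available_providers()` is a build-time property, which is why the paper's comparison hinges on how each framework was compiled for the SG2042.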
Giulio Malenza, University of Torino, Italy
Francesco Targa, University of Torino, Italy
Adriano Marques Garcia, University of Torino, Italy
Marco Aldinucci, Full Professor in Computer Science, University of Torino
Parallel programming models, parallel programming, runtime systems, HPC, cloud engineering
Robert Birke, Università degli Studi di Torino