Metrics and evaluations for computational and sustainable AI efficiency

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI inference evaluation lacks standardized, reproducible cross-platform methodologies that jointly account for performance, energy efficiency, and carbon emissions. To address this, we propose the first multi-dimensional evaluation framework incorporating accuracy constraints, enabling systematic quantification of latency, throughput, energy consumption, and carbon footprint under realistic service conditions. Our framework supports joint analysis across heterogeneous hardware (e.g., GH200, RTX 4090), software stacks (PyTorch, TensorRT, ONNX Runtime), and numerical precisions—including multi-level quantization. By constructing an efficiency–carbon emission Pareto frontier, we establish the first principled basis for comparing and sustainably optimizing AI systems across diverse deployment environments. All tools are open-sourced to ensure transparency and independent validation. This work provides an empirical foundation and actionable optimization pathways for green AI deployment.
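The location-adjusted carbon accounting described above reduces to converting measured energy into CO2-equivalent grams via the local grid's carbon intensity. A minimal sketch follows; the function name and the grid-intensity figure are illustrative assumptions, not the paper's actual implementation.

```python
def carbon_emissions_g(energy_joules: float, grid_intensity_g_per_kwh: float) -> float:
    """Convert measured inference energy to location-adjusted CO2e grams.

    grid_intensity_g_per_kwh is the carbon intensity of the electricity
    grid at the deployment location (gCO2e per kWh).
    """
    kwh = energy_joules / 3.6e6  # 1 kWh = 3.6 MJ
    return kwh * grid_intensity_g_per_kwh

# Example: a 500 kJ inference run on a grid at 350 gCO2e/kWh (illustrative value)
print(round(carbon_emissions_g(500_000, 350.0), 2))  # → 48.61
```

The same measured energy can thus map to very different footprints depending on where the model is served, which is why the framework reports emissions as location-adjusted rather than as raw energy alone.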

📝 Abstract
The rapid advancement of Artificial Intelligence (AI) has created unprecedented demands for computational power, yet methods for evaluating the performance, efficiency, and environmental impact of deployed models remain fragmented. Current approaches often fail to provide a holistic view, making it difficult to compare and optimise systems across heterogeneous hardware, software stacks, and numeric precisions. To address this gap, we propose a unified and reproducible methodology for AI model inference that integrates computational and environmental metrics under realistic serving conditions. Our framework provides a pragmatic, carbon-aware evaluation by systematically measuring latency and throughput distributions, energy consumption, and location-adjusted carbon emissions, all while maintaining matched accuracy constraints for valid comparisons. We apply this methodology to multi-precision models across diverse hardware platforms, from data-centre accelerators like the GH200 to consumer-level GPUs such as the RTX 4090, running on mainstream software stacks including PyTorch, TensorRT, and ONNX Runtime. By systematically categorising these factors, our work establishes a rigorous benchmarking framework that produces decision-ready Pareto frontiers, clarifying the trade-offs between accuracy, latency, energy, and carbon. The accompanying open-source code enables independent verification and facilitates adoption, empowering researchers and practitioners to make evidence-based decisions for sustainable AI deployment.
Problem

Research questions and friction points this paper is trying to address.

Fragmented metrics for evaluating AI inference performance and environmental impact
Lack of holistic comparison across heterogeneous hardware, software stacks, and numeric precisions
Absence of rigorous benchmarking of accuracy-latency-energy-carbon trade-offs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified, reproducible methodology integrating computational and environmental metrics
Systematic measurement of latency and throughput distributions, energy consumption, and location-adjusted carbon emissions
Rigorous benchmarking framework producing decision-ready Pareto frontiers
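The Pareto frontiers mentioned above amount to a non-dominated filter over per-configuration metrics. Below is a minimal sketch assuming two objectives, latency and carbon per thousand requests; the configuration names and metric values are hypothetical, not results from the paper.

```python
from typing import NamedTuple

class Config(NamedTuple):
    name: str
    latency_ms: float       # lower is better
    carbon_g_per_1k: float  # gCO2e per 1000 requests, lower is better

def pareto_frontier(configs: list[Config]) -> list[Config]:
    """Keep configurations that no other configuration dominates
    (i.e., is at least as good on both objectives and strictly
    better on at least one)."""
    frontier = []
    for c in configs:
        dominated = any(
            o.latency_ms <= c.latency_ms
            and o.carbon_g_per_1k <= c.carbon_g_per_1k
            and (o.latency_ms < c.latency_ms or o.carbon_g_per_1k < c.carbon_g_per_1k)
            for o in configs
        )
        if not dominated:
            frontier.append(c)
    return frontier

# Hypothetical hardware/precision configurations for illustration only
configs = [
    Config("fp32-pytorch", 12.0, 9.0),
    Config("fp16-tensorrt", 6.0, 5.0),
    Config("int8-tensorrt", 4.0, 6.0),
    Config("fp32-onnx", 10.0, 10.0),
]
print([c.name for c in pareto_frontier(configs)])
```

In the paper's setting the filter would run only over configurations that already satisfy the matched-accuracy constraint, so every point on the frontier represents a valid accuracy-preserving deployment choice.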