Benchmarking Energy and Latency in TinyML: A Novel Method for Resource-Constrained AI

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
TinyML deployments on resource-constrained microcontrollers (MCUs) lack fine-grained, reproducible joint energy-efficiency and latency evaluation methodologies. Method: This paper introduces the first end-to-end benchmarking framework covering preprocessing, inference, and postprocessing stages. It enables battery-powered, full-stack power measurement, automated thousand-run testing, and cross-platform energy attribution analysis. Leveraging the NPU-integrated STM32N6 platform, the framework integrates on-die sensors, dynamic voltage and frequency scaling (DVFS), and statistically robust validation to simultaneously capture millisecond-level latency and microjoule-level energy consumption. Contribution/Results: Experiments reveal that reducing core voltage/frequency significantly improves preprocessing and postprocessing energy efficiency with negligible impact on inference latency. Energy-delay product (EDP) analysis quantifies stage-wise energy distribution, providing reproducible, quantitative insights for hardware-algorithm co-optimization.

📝 Abstract
The rise of IoT has increased the need for on-edge machine learning, with TinyML emerging as a promising solution for resource-constrained devices such as MCUs. However, evaluating their performance remains challenging due to diverse architectures and application scenarios, and current solutions have many non-negligible limitations. This work introduces an alternative benchmarking methodology that integrates energy and latency measurements while distinguishing three execution phases: pre-inference, inference, and post-inference. Additionally, the setup ensures that the device operates without being powered by an external measurement unit, while automated testing can be leveraged to enhance statistical significance. To evaluate our setup, we tested the STM32N6 MCU, which includes an NPU for executing neural networks. Two configurations were considered: high-performance and low-power. The variation of the EDP was analyzed separately for each phase, providing insights into the impact of hardware configurations on energy efficiency. Each model was tested 1000 times to ensure statistically relevant results. Our findings demonstrate that reducing the core voltage and clock frequency improves the efficiency of pre- and post-processing without significantly affecting network execution performance. This approach can also be used for cross-platform comparisons to determine the most efficient inference platform and to quantify how pre- and post-processing overhead varies across different hardware implementations.
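The per-phase energy-delay product analysis described above can be sketched as a small post-processing script. This is a minimal illustration, not the paper's actual tooling: the phase names follow the paper, but all measurement values below are hypothetical placeholders standing in for averages over the automated 1000-run campaigns.

```python
# Sketch of per-phase energy-delay product (EDP) analysis for two MCU
# configurations. All numeric values are illustrative assumptions, not
# results from the paper.

PHASES = ("pre-inference", "inference", "post-inference")

def edp(energy_uj: float, latency_ms: float) -> float:
    """Energy-delay product in uJ*ms (lower is better)."""
    return energy_uj * latency_ms

# Hypothetical mean (energy in uJ, latency in ms) per phase, averaged
# over 1000 automated runs per configuration.
measurements = {
    "high-performance": {"pre-inference":  (120.0, 1.5),
                         "inference":      (450.0, 4.0),
                         "post-inference": (80.0, 1.0)},
    "low-power":        {"pre-inference":  (70.0, 2.8),
                         "inference":      (440.0, 4.2),
                         "post-inference": (45.0, 1.9)},
}

for config, phases in measurements.items():
    for phase in PHASES:
        energy, latency = phases[phase]
        print(f"{config:16s} {phase:15s} EDP = {edp(energy, latency):8.1f} uJ*ms")
```

Comparing the two columns per phase mirrors the paper's finding: a low-power (reduced voltage/frequency) configuration can lower pre- and post-processing EDP while leaving NPU inference largely unaffected.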
Problem

Research questions and friction points this paper is trying to address.

Evaluating TinyML performance on resource-constrained devices
Measuring energy and latency across inference phases
Comparing hardware configurations for energy efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel benchmarking integrates energy and latency measurements
Automated testing enhances statistical reliability of results
Analyzes EDP variation per phase for efficiency insights
Pietro Bartoli
Politecnico di Milano
Machine Learning, TinyML, Microcontroller, Wearable, Smart Eyewear
Christian Veronesi
Dept. of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
Andrea Giudici
Dept. of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
David Siorpaes
STMicroelectronics, Agrate Brianza (MB), Italy
Diana Trojaniello
Smart Eyewear Laboratory, EssilorLuxottica, Milan, Italy
Franco Zappa
Politecnico di Milano
Single Photon Avalanche Diode (SPAD), single photon detection, SPAD imagers, microelectronic instrumentation