The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models

📅 2026-02-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the lack of systematic evaluation of how different numerical representations in embedded deep learning models respond to electromagnetic fault injection (EMFI) attacks. For the first time, it systematically compares the fault tolerance of 32/16-bit floating-point and 8/4-bit integer quantized parameters across ResNet-18/34/50 and VGG-11 models, using a low-cost EMFI platform to inject faults into a low-power embedded memory chip. The results demonstrate that integer quantization significantly enhances model robustness: while floating-point models suffer near-complete accuracy collapse after a single fault injection, the 8-bit integer VGG-11 maintains approximately 70% Top-1 and 90% Top-5 accuracy, an advantage that becomes especially pronounced in larger networks.

๐Ÿ“ Abstract
Fault injection attacks on embedded neural network models have been shown to be a potent threat. Numerous works have studied the resilience of models from various points of view. To date, there is no comprehensive study evaluating the influence of the number representations used for model parameters against electromagnetic fault injection (EMFI) attacks. In this paper, we investigate how four different number representations influence the success of an EMFI attack on embedded neural network models. We chose two common floating-point representations (32-bit and 16-bit) and two integer representations (8-bit and 4-bit). We deployed four common image classifiers, ResNet-18, ResNet-34, ResNet-50, and VGG-11, on an embedded memory chip and utilized a low-cost EMFI platform to trigger faults. Our results show that while floating-point representations exhibit an almost complete degradation in accuracy (Top-1 and Top-5) after a single fault injection, integer representations offer better resistance overall. In particular, with the 8-bit representation on a relatively large network (VGG-11), Top-1 accuracy stays at around 70% and Top-5 at around 90%.
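The gap between representations follows from their encodings: a single flipped bit in an IEEE-754 float32 can land in the exponent field and change a weight's magnitude by many orders of magnitude, whereas a flip in an int8 weight shifts its value by at most 128. The sketch below (not from the paper; the weight values and bit positions are illustrative) simulates a single bit flip in each encoding:

```python
import struct

def flip_bit_float32(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) in the IEEE-754 binary32 encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return flipped

def flip_bit_int8(value: int, bit: int) -> int:
    """Flip one bit in the two's-complement int8 encoding of `value`."""
    flipped = (value & 0xFF) ^ (1 << bit)
    return flipped - 256 if flipped >= 128 else flipped

w = 0.5  # a hypothetical model weight
# Flipping the most significant exponent bit of the float32 encoding
# turns 0.5 into 2**127 (~1.7e38), wrecking any downstream activation.
print(flip_bit_float32(w, 30))
# The worst single flip in an int8 weight (the sign bit) only moves it
# by 128, a perturbation the quantized network can often absorb.
print(flip_bit_int8(64, 7))
```

This bounded perturbation is one plausible reason the quantized models in the study degrade gracefully while the floating-point ones collapse after a single injected fault.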
Problem

Research questions and friction points this paper is trying to address.

EMFI
number representation
fault injection
embedded deep learning
model robustness