Real-world Edge Neural Network Implementations Leak Private Interactions Through Physical Side Channel

📅 2025-01-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper uncovers a critical security vulnerability: edge neural networks deployed on real hardware (FPGAs and Raspberry Pi devices) leak sensitive user-interaction data (e.g., inputs, outputs, and LLM-generated tokens) through electromagnetic (EM) side-channel radiation. Method: the authors propose ScaAR, the first implementation-agnostic EM side-channel attack framework, combining deep learning-assisted side-channel analysis (DLSCA), EM signal modeling, and black-box hardware inference. Contribution/Results: ScaAR enables the first token-level EM analysis of edge large language models on the Raspberry Pi 5, demonstrating statistically significant token distinguishability. Experiments successfully recover classifier output labels on the AMD-Xilinx ZCU104 FPGA and the Raspberry Pi 3B; on the Raspberry Pi 5, token-identification accuracy substantially exceeds the random baseline. These results establish a new empirical paradigm for security evaluation of edge AI systems, highlighting previously overlooked physical-layer threats in resource-constrained deployments.
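The summary's core mechanism, profiled side-channel classification, can be illustrated with a minimal sketch. The paper's ScaAR framework uses deep neural networks (DLSCA); the softmax-regression stand-in below, trained on entirely synthetic "EM traces", is a hypothetical simplification that only illustrates the profiling/attack workflow, not the authors' actual models or signal processing.

```python
import numpy as np

# Synthetic stand-in for EM traces: each output class leaks a distinct
# pattern buried in noise. All parameters here are illustrative assumptions.
rng = np.random.default_rng(0)
n_classes, trace_len = 4, 64
patterns = rng.normal(size=(n_classes, trace_len))

def make_traces(n):
    labels = rng.integers(0, n_classes, size=n)
    traces = patterns[labels] + 0.5 * rng.normal(size=(n, trace_len))
    return traces, labels

# Profiling phase: the adversary records labeled traces on a device they
# control and fits a classifier (here softmax regression; DLSCA would use
# a deep network).
X_train, y_train = make_traces(800)
W = np.zeros((trace_len, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y_train]
for _ in range(300):
    logits = X_train @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p - onehot
    W -= 0.01 * X_train.T @ grad / len(X_train)
    b -= 0.01 * grad.mean(axis=0)

# Attack phase: classify unlabeled traces captured from the victim device.
X_test, y_test = make_traces(400)
pred = np.argmax(X_test @ W + b, axis=1)
acc = (pred == y_test).mean()
print(f"recovered-label accuracy: {acc:.2f}")
```

With four classes, any accuracy well above the 0.25 random baseline means the traces leak the interaction's class label, which is the attack's success criterion.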

📝 Abstract
Neural networks have become a fundamental component of numerous practical applications, and their implementations, which are often accelerated by hardware, are integrated into all types of real-world physical devices. User interactions with neural networks on hardware accelerators are commonly considered privacy-sensitive. Substantial efforts have been made to uncover vulnerabilities and enhance privacy protection at the level of machine learning algorithms, including membership inference attacks, differential privacy, and federated learning. However, neural networks are ultimately implemented and deployed on physical devices, and current research pays comparatively less attention to privacy protection at the implementation level. In this paper, we introduce a generic physical side-channel attack, ScaAR, that extracts user interactions with neural networks by leveraging electromagnetic (EM) emissions of physical devices. Our proposed attack is implementation-agnostic, meaning it does not require the adversary to possess detailed knowledge of the hardware or software implementations, thanks to the capabilities of deep learning-based side-channel analysis (DLSCA). Experimental results demonstrate that, through the EM side channel, ScaAR can effectively extract the class label of user interactions with neural classifiers, including inputs and outputs, on the AMD-Xilinx MPSoC ZCU104 FPGA and Raspberry Pi 3 B. In addition, for the first time, we provide side-channel analysis on edge Large Language Model (LLM) implementations on the Raspberry Pi 5, showing that EM side channel leaks interaction data, and different LLM tokens can be distinguishable from the EM traces.
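The abstract's claim that "different LLM tokens can be distinguishable from the EM traces" is the kind of statement typically backed by a leakage-assessment statistic such as a per-sample Welch's t-test. The sketch below is a hypothetical illustration on synthetic traces (all signal levels are assumptions, not measurements from the paper) showing how such a test flags sample points where two tokens' traces differ significantly.

```python
import numpy as np

# Synthetic EM traces for two hypothetical tokens: same baseline activity,
# slightly different leakage at a few sample points (an assumption for
# illustration only).
rng = np.random.default_rng(1)
trace_len, n = 64, 500
base = rng.normal(size=trace_len)
leak_a = base.copy(); leak_a[10:14] += 0.3
leak_b = base.copy(); leak_b[10:14] -= 0.3
traces_a = leak_a + 0.8 * rng.normal(size=(n, trace_len))
traces_b = leak_b + 0.8 * rng.normal(size=(n, trace_len))

def welch_t(x, y):
    """Per-sample Welch's t-statistic between two sets of traces."""
    vx = x.var(axis=0, ddof=1) / len(x)
    vy = y.var(axis=0, ddof=1) / len(y)
    return (x.mean(axis=0) - y.mean(axis=0)) / np.sqrt(vx + vy)

t = welch_t(traces_a, traces_b)
# |t| > 4.5 is the conventional threshold in side-channel leakage detection.
leaky = int((np.abs(t) > 4.5).sum())
print(f"max |t| = {np.abs(t).max():.1f}, leaky sample points: {leaky}")
```

Sample points exceeding the threshold indicate the two tokens are statistically distinguishable from their EM emissions, which is the precondition for the token-identification results reported above.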
Problem

Research questions and friction points this paper is trying to address.

Privacy Leakage
Edge Neural Networks
Electromagnetic Emissions
Innovation

Methods, ideas, or system contributions that make the work stand out.

ScaAR
Electromagnetic Radiation Analysis
Neural Network Interaction Inference
Zhuoran Liu, Radboud University
Senna van Hoek, Radboud University
Péter Horváth, Radboud University
Dirk Lauret, Radboud University
Xiaoyun Xu, Radboud University
Lejla Batina, Radboud University