Make Shuffling Great Again: A Side-Channel Resistant Fisher-Yates Algorithm for Protecting Neural Networks

📅 2025-01-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Embedded neural networks are vulnerable to side-channel attacks, which can leak sensitive parameters such as weights. To address this threat, this paper proposes a side-channel-resistant Fisher–Yates shuffling algorithm. Our method is the first to integrate masking techniques with Blakely’s modular multiplication, thereby eliminating power leakage induced by conventional modular division operations. Furthermore, we enhance side-channel resistance through information-hiding-based shuffling implementation. Correlation Power Analysis (CPA) on an ARM Cortex-M4 platform demonstrates that the proposed scheme effectively resists CPA attacks. In terms of overhead, it incurs only twice the memory footprint of the largest layer’s parameter count and introduces merely 0.49%–4% computational overhead. Thus, our approach achieves a favorable trade-off between side-channel security and implementation efficiency.

📝 Abstract
Neural network models implemented in embedded devices have been shown to be susceptible to side-channel attacks (SCAs), allowing recovery of proprietary model parameters, such as weights and biases. Countermeasures already used for protecting cryptographic implementations can be tailored to protect embedded neural network models. Shuffling, a hiding-based countermeasure that randomizes the order of computations, was shown to be vulnerable to SCA when the Fisher-Yates algorithm is used. In this paper, we propose a design of an SCA-secure version of the Fisher-Yates algorithm. By integrating the masking technique for modular reduction and Blakely's method for modular multiplication, we effectively remove the vulnerability in the division operation that led to side-channel leakage in the original version of the algorithm. We experimentally evaluate the effectiveness of the countermeasure by implementing a correlation power analysis attack on an embedded neural network model running on an ARM Cortex-M4. Compared to the original proposal, the memory overhead is $2\times$ the size of the biggest layer of the network, while the time overhead varies from $4\%$ to $0.49\%$ for a layer with $100$ and $1000$ neurons, respectively.
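For context, a minimal sketch of the *unprotected* baseline Fisher-Yates shuffle that the paper hardens. On embedded targets the swap index is typically drawn as `rand() % (i + 1)`; that modular division is the operation whose power leakage the paper's masked, Blakely-based variant removes. The countermeasure itself is not reproduced here; this Python sketch (with `random.randrange` standing in for the C modular reduction) only illustrates where the leaky operation sits:

```python
import random

def fisher_yates(idx):
    """Unprotected Fisher-Yates shuffle of a list, in place.

    On an embedded target the index j is typically computed as
    rand() % (i + 1); that modular division is the side-channel
    leakage point the paper's countermeasure eliminates.
    """
    for i in range(len(idx) - 1, 0, -1):
        j = random.randrange(i + 1)  # stands in for the leaky rand() % (i + 1)
        idx[i], idx[j] = idx[j], idx[i]
    return idx

# e.g., randomize the processing order of a 10-neuron layer
order = fisher_yates(list(range(10)))
```

Shuffling the neuron processing order this way hides which weight is being multiplied at a given point in the power trace, which is why the shuffle itself must not leak its permutation.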
Problem

Research questions and friction points this paper is trying to address.

Neural Network Security
Side-Channel Attacks
Embedded Devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced Fisher-Yates Algorithm
Side-Channel Attack Prevention
Neural Network Security
Leonard Puškáč
Slovak University of Technology, Bratislava, Slovakia
Marek Benovič
Slovak University of Technology, Bratislava, Slovakia
J. Breier
TTControl GmbH, Vienna, Austria
Xiaolu Hou
Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
Cryptography · Hardware Security · AI Security