Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks

📅 2019-05-28
🏛️ Design Automation Conference
📈 Citations: 76
Influential: 8
🤖 AI Summary
Deep neural networks (DNNs) exhibit parameter-level security vulnerabilities in high-reliability applications, where malicious parameter modifications can induce targeted misclassifications without compromising overall accuracy. Method: This paper proposes a stealthy parameter attack framework that minimizes an L₀/L₂ hybrid norm perturbation to precisely alter model parameters—ensuring a given input is misclassified into a specified target label while preserving the original predictions for all other samples. The method jointly optimizes parameter perturbations and classification constraints via the Alternating Direction Method of Multipliers (ADMM). Contribution/Results: To our knowledge, this is the first work achieving “fault-hiding” attacks: injecting multiple sneaking faults while reducing overall test accuracy by less than 0.1%, rendering the attack highly imperceptible. Extensive evaluations across mainstream DNN architectures and benchmark datasets demonstrate both efficacy and exceptional stealthiness, offering novel insights for DNN robustness assessment and adversarial defense research.
📝 Abstract
Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into any target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack with two constraints: 1) the classification of the other images should be unchanged, and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject designated faults (misclassifications) but also to hide the faults for stealthy or sneaking considerations by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications (using the L0 norm to measure the number of modifications and the L2 norm to measure their magnitude). Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without losing overall test accuracy.

CCS CONCEPTS: • Security and privacy → Domain-specific security and privacy architectures; Network security; • Networks → Network performance analysis; • Theory of computation → Theory and algorithms for application domains.
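The objective described in the abstract combines three ingredients: an L2 term for the magnitude of the parameter modification, an L0 term for the number of modified parameters, and a classification constraint forcing the chosen input to the target label. A minimal sketch of that objective on a toy linear classifier (the variable names, penalty weights, and hinge-style penalty are our own illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # original parameters (3 classes, 4 features)
delta = np.zeros_like(W)      # parameter perturbation to be optimized
delta[0, 1] = 0.5             # a sparse candidate modification

def l2_cost(delta):
    """Magnitude of the modification (squared L2 norm)."""
    return float(np.sum(delta ** 2))

def l0_cost(delta, tol=1e-8):
    """Number of modified parameters (L0 'norm')."""
    return int(np.sum(np.abs(delta) > tol))

def misclass_penalty(W, delta, x, target):
    """Hinge-style penalty: positive until x is classified as `target`
    by the modified model (W + delta)."""
    logits = (W + delta) @ x
    margin = float(np.max(np.delete(logits, target)) - logits[target])
    return max(margin, 0.0)

x = rng.normal(size=4)        # the input the attacker wants misclassified
target = 0                    # attacker-specified target label
# Illustrative weights for the penalized objective (assumed values)
total = l2_cost(delta) + 1.0 * l0_cost(delta) \
        + 10.0 * misclass_penalty(W, delta, x, target)
```

In the paper the "fault hiding" requirement adds analogous constraints keeping the predictions of all other samples unchanged; here only the targeted-misclassification term is shown for brevity.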
Problem

Research questions and friction points this paper is trying to address.

Vulnerability of DNNs to stealthy parameter modifications
Misclassification of images while maintaining overall accuracy
Minimizing parameter changes to hide adversarial faults
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses ADMM for DNN parameter optimization
Minimizes parameter modifications via L0 and L2 norms
Maintains model accuracy while injecting faults
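ADMM is a natural fit here because it splits the non-convex L0 term into its own subproblem, whose solution is the element-wise hard-thresholding proximal operator. A minimal sketch of that operator (our own illustration under standard ADMM conventions, not the paper's code):

```python
import numpy as np

def prox_l0(v, lam):
    """Proximal operator of lam * ||z||_0: keep entries whose squared
    value exceeds 2*lam, zero out the rest (hard thresholding)."""
    out = v.copy()
    out[v ** 2 <= 2 * lam] = 0.0
    return out

v = np.array([0.1, -2.0, 0.5, 3.0])
z = prox_l0(v, lam=0.5)   # entries with v**2 > 1.0 survive
```

Within an ADMM iteration this step would alternate with a (differentiable) update enforcing the classification constraints and a dual-variable update; the thresholding is what drives the perturbation toward modifying only a few parameters.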
Pu Zhao
Northeastern University, Boston, Massachusetts
Siyue Wang
Northeastern University, Boston, Massachusetts
Cheng Gongye
Nvidia
Hardware Security · Deep Neural Network
Yanzhi Wang
Northeastern University, Boston, Massachusetts
Yunsi Fei
Professor of Electrical and Computer Engineering, Northeastern University
hardware security · EDA · computer architecture · embedded systems · machine learning systems
X. Lin
Northeastern University, Boston, Massachusetts