Adversarial Evasion Attacks on Computer Vision using SHAP Values

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the vulnerability of computer vision models to adversarial evasion attacks by proposing a novel white-box attack method grounded in SHAP (Shapley Additive Explanations) values. The approach leverages SHAP during inference to quantify the contribution of individual input features to the model’s output, enabling the generation of highly imperceptible adversarial examples. As the first work to integrate SHAP values into adversarial attack strategies, the proposed method demonstrates robust performance even in scenarios where gradient information is limited or obscured. Experimental results show that, compared to the classical Fast Gradient Sign Method (FGSM), this technique achieves superior effectiveness and stability in inducing misclassification while maintaining high attack stealth.
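The summary above describes using SHAP attributions to identify the inputs that matter most and perturbing only those. A minimal illustrative sketch of that idea, not the authors' method: for a toy linear scorer, exact Shapley values reduce to `w_i * (x_i - baseline_i)`, which stand in here for explainer output (the paper targets deep vision models). All names and parameters are hypothetical.

```python
# Hedged sketch: SHAP-guided perturbation on a toy linear classifier.
# For a linear model, the exact SHAP value of feature i is
# w_i * (x_i - baseline_i); we perturb only the top-k attributions.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)      # toy "model": score = w @ x
x = rng.normal(size=16)      # toy "image" of 16 pixels
baseline = np.zeros(16)      # reference input for attribution

def logit(v):
    return float(w @ v)

# Exact Shapley values for the linear scorer.
phi = w * (x - baseline)

# Perturb only the k most influential pixels, each in the direction
# that lowers the positive-class score (hence -sign(w_i)).
k, eps = 4, 0.5
top = np.argsort(-np.abs(phi))[:k]
x_adv = x.copy()
x_adv[top] -= eps * np.sign(w[top])

print(f"clean score: {logit(x):+.3f}  adversarial score: {logit(x_adv):+.3f}")
```

Because only a few high-attribution pixels move, the perturbation stays sparse, which is one plausible route to the imperceptibility the summary emphasizes.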

📝 Abstract
The paper introduces a white-box attack on computer vision models using SHAP values. It demonstrates how adversarial evasion attacks can compromise the performance of deep learning models by reducing output confidence or inducing misclassifications. Such attacks are particularly insidious because they can deceive an algorithm's perception while eluding human perception, owing to their imperceptibility to the human eye. The proposed attack leverages SHAP values to quantify the significance of individual inputs to the output at the inference stage. A comparison is drawn between the SHAP attack and the well-known Fast Gradient Sign Method. We find evidence that SHAP attacks are more robust in generating misclassifications, particularly in gradient-hiding scenarios.
Problem

Research questions and friction points this paper is trying to address.

adversarial evasion attacks
computer vision
SHAP values
misclassification
imperceptibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

SHAP values
adversarial evasion attacks
white-box attack
gradient hiding
computer vision
Frank Mollard
Business Intelligence & Data Science, Infraserv GmbH & Co. Höchst KG, Germany
Marcus Becker
International School of Management, Germany
Florian Röhrbein
TUC