No Data, No Optimization: A Lightweight Method To Disrupt Neural Networks With Sign-Flips

📅 2025-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the structural fragility of deep neural networks (DNNs) under zero-data, zero-training, and zero-optimization conditions. The authors propose Deep Neural Lesion (DNL), a lightweight bit-level attack that induces catastrophic accuracy degradation (up to a 99.8% drop for ResNet50 on ImageNet) by flipping as few as two sign bits, requiring at most a single forward and backward pass. The key contribution is a systematic identification and exploitation of the inherent sensitivity of parameter sign bits, achieved via gradient-sign attribution and minimal bit perturbation, with attack vectors spanning hardware, firmware, and software. Extensive experiments confirm DNL's efficacy and universality across diverse computer vision models and datasets. The authors also introduce a vulnerability-aware defense strategy: protecting fewer than 0.1% of sign bits, identified via the same sensitivity analysis, significantly enhances model robustness against such bit-level attacks.
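The core mechanism can be illustrated with a small sketch. The paper does not publish reference code here, so the saliency score below is an assumption: a sign flip moves a weight w to -w, so to first order the loss change is grad * (-2w), and the most damaging flips are those maximizing -grad * w. The flip itself toggles the IEEE-754 sign bit (bit 31 of a float32). Function and variable names are illustrative, not the authors' API.

```python
import numpy as np

def flip_sign_bits(weights, grads, k=2):
    """Hypothetical DNL-style attack sketch: flip the IEEE-754 sign bit
    of the k weights whose sign inversion most increases the loss,
    using a first-order saliency score from a single backward pass."""
    w = np.asarray(weights, dtype=np.float32).copy()
    g = np.asarray(grads, dtype=np.float32)
    # First-order loss increase of flipping w -> -w is g * (-2w);
    # drop the constant factor and rank by -g * w.
    score = -(g * w)
    idx = np.argsort(score.ravel())[-k:]
    # Reinterpret the float buffer as uint32 and toggle bit 31 (sign bit).
    bits = w.view(np.uint32)
    bits.ravel()[idx] ^= 0x80000000
    return w
```

Ranking by a gradient-based score rather than optimizing over bit patterns is what keeps the attack to a single forward/backward pass; the zero-pass variant mentioned in the abstract would need a data-free heuristic in place of `grads`.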

📝 Abstract
Deep Neural Networks (DNNs) can be catastrophically disrupted by flipping only a handful of sign bits in their parameters. We introduce Deep Neural Lesion (DNL), a data-free, lightweight method that locates these critical parameters and triggers massive accuracy drops. We validate its efficacy on a wide variety of computer vision models and datasets. The method requires no training data or optimization and can be carried out via common software-, firmware-, or hardware-based attack vectors. An enhanced variant that uses a single forward and backward pass further amplifies the damage beyond DNL's zero-pass approach. Flipping just two sign bits in ResNet50 on ImageNet reduces accuracy by 99.8%. We also show that selectively protecting a small fraction of vulnerable sign bits provides a practical defense against such attacks.
Problem

Research questions and friction points this paper is trying to address.

Disrupts neural networks by flipping sign bits.
Locates critical parameters without training data.
Protects vulnerable bits to defend against attacks.
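The defense side, protecting the most vulnerable sign bits, can be sketched the same way. This is an assumed implementation, not the paper's: record the signs of the parameters with the highest flip sensitivity (here scored by |grad * weight|, covering the stated <0.1% of bits), then periodically re-impose those signs to undo any flips. Names like `snapshot_sign_bits` are hypothetical.

```python
import numpy as np

def snapshot_sign_bits(weights, grads, frac=0.001):
    """Record the signs of the most flip-sensitive parameters
    (top `frac` fraction by |grad * weight|, an assumed score)."""
    w = np.asarray(weights, dtype=np.float32)
    g = np.asarray(grads, dtype=np.float32)
    score = np.abs(g * w)
    k = max(1, int(frac * w.size))
    idx = np.argsort(score.ravel())[-k:]
    signs = w.ravel()[idx] < 0  # True where the protected weight is negative
    return idx, signs

def restore_sign_bits(weights, idx, signs):
    """Re-impose the recorded signs on the protected indices,
    reverting any sign-bit flips that occurred since the snapshot."""
    w = np.asarray(weights, dtype=np.float32).copy()
    flat = w.ravel()
    flat[idx] = np.where(signs, -np.abs(flat[idx]), np.abs(flat[idx]))
    return w
```

Because only a tiny index set and one bit per protected weight are stored, the check is cheap enough to run alongside inference; in practice the snapshot could also live in write-protected memory.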
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-free method disrupts neural networks.
Sign-flips cause massive accuracy drops.
No training or optimization required.