Mini-Batch Robustness Verification of Deep Neural Networks

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing local robustness verifiers for neural networks suffer from high computational overhead and a trade-off between efficiency and precision, hindering scalable verification over large sets of inputs. This paper proposes group-wise local robustness verification, the first approach to bring a mini-batch mechanism into robustness verification. It dynamically groups inputs whose behaviors within their ε-balls are sufficiently similar and verifies them jointly, enabling analysis sharing and counterexample-guided refinement. The method supports both fully connected and convolutional networks and comprises three core techniques: dynamic batch construction, adaptive batch sizing, and joint verification. Experiments on MNIST and CIFAR-10 show an average 2.3× speedup over one-by-one verification (up to 4.1×), in which case total verification time drops from 24 hours to 6 hours, substantially improving efficiency without sacrificing precision (the verifier remains sound and complete).
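As a rough illustration of the joint-verification idea, one can cover a mini-batch of ε-balls with a single bounding box and propagate that box through the network with interval arithmetic; if the box is verified, every ball inside it is proven robust at once. The toy network, its random weights, and the `verify_batch` helper below are hypothetical, and the paper's analysis is more precise than plain interval propagation; this is only a minimal sketch of the mechanism.

```python
import numpy as np

# Hypothetical tiny ReLU network (weights are illustrative, not from the paper).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def interval_forward(lo, up):
    """Soundly propagate an input box [lo, up] through the network."""
    for W, b in ((W1, b1), (W2, b2)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, up = Wp @ lo + Wn @ up + b, Wp @ up + Wn @ lo + b
        if W is not W2:                      # ReLU on the hidden layer only
            lo, up = np.maximum(lo, 0), np.maximum(up, 0)
    return lo, up

def verify_batch(centers, eps, label):
    """Jointly verify a mini-batch of eps-balls via one covering box."""
    lo = centers.min(axis=0) - eps           # one box covering every ball
    up = centers.max(axis=0) + eps
    out_lo, out_up = interval_forward(lo, up)
    # Robust if the target logit's lower bound beats all other upper bounds.
    others = [i for i in range(len(out_lo)) if i != label]
    return all(out_lo[label] > out_up[i] for i in others)
```

A failed joint check does not mean any individual ball is non-robust, since the covering box over-approximates the batch; that is exactly where the paper's refinement step comes in.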

📝 Abstract
Neural network image classifiers are ubiquitous in many safety-critical applications. However, they are susceptible to adversarial attacks. To understand their robustness to attacks, many local robustness verifiers have been proposed to analyze $ε$-balls of inputs. Yet, existing verifiers incur long analysis times or lose too much precision, making them less effective for a large set of inputs. In this work, we propose a new approach to local robustness: group local robustness verification. The key idea is to leverage the similarity of the network computations of certain $ε$-balls to reduce the overall analysis time. We propose BaVerLy, a sound and complete verifier that boosts the local robustness verification of a set of $ε$-balls by dynamically constructing and verifying mini-batches. BaVerLy adaptively identifies successful mini-batch sizes, accordingly constructs mini-batches of $ε$-balls that have similar network computations, and verifies them jointly. If a mini-batch is verified, all $ε$-balls are proven robust. Otherwise, one $ε$-ball is suspected of not being robust, guiding the refinement. In the latter case, BaVerLy leverages the analysis results to expedite the analysis of that $ε$-ball as well as the other $ε$-balls in the batch. We evaluate BaVerLy on fully connected and convolutional networks for MNIST and CIFAR-10. Results show that BaVerLy speeds up the common one-by-one verification by 2.3x on average and up to 4.1x, in which case it reduces the total analysis time from 24 hours to 6 hours.
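The batching loop described in the abstract (verify a mini-batch jointly; on failure, single out one suspect ε-ball and adapt the batch size) can be outlined as a control loop. The sketch below is hypothetical: `verify_joint`, `verify_single`, the choice of suspect, and the doubling/halving policy are illustrative assumptions, not BaVerLy's actual heuristics or its reuse of intermediate analysis results.

```python
def verify_all(balls, verify_joint, verify_single, init_size=4):
    """Adaptive mini-batch verification loop (illustrative sketch)."""
    robust, not_robust, size = set(), set(), init_size
    pending = list(balls)
    while pending:
        batch, pending = pending[:size], pending[size:]
        if verify_joint(batch):
            robust.update(batch)          # one joint proof covers the batch
            size *= 2                     # success: try a larger batch next
        elif len(batch) == 1:
            not_robust.add(batch[0])      # single ball failed: a real violation
            size = init_size
        else:
            # A (possibly spurious) counterexample points at one suspect ball:
            # verify it alone, re-queue the rest, and shrink the batch size.
            # Here the suspect is picked arbitrarily; in the paper the
            # counterexample guides this choice.
            suspect, rest = batch[0], batch[1:]
            (robust if verify_single(suspect) else not_robust).add(suspect)
            pending = rest + pending
            size = max(size // 2, 1)
    return robust, not_robust
```

Growing the batch after a success and shrinking it after a failure mirrors the paper's idea of adaptively identifying batch sizes that tend to verify in one shot.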
Problem

Research questions and friction points this paper is trying to address.

Verifying local robustness of neural networks efficiently
Reducing analysis time for large input sets verification
Improving precision in adversarial robustness verification methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group local robustness verification for neural networks
Dynamic mini-batch construction with similar computations
Adaptive identification of successful batch sizes
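One plausible way to realize "mini-batches with similar computations" is to key each input by the ReLU activation pattern at its ε-ball's center and group inputs sharing a key, since balls with the same pattern traverse the same linear region of the network. This heuristic and both helper functions are assumptions for illustration; the paper's dynamic batch construction is its own technique.

```python
import numpy as np

def activation_signature(x, W1, b1):
    """Hypothetical similarity key: which hidden ReLUs fire at the center x."""
    return tuple((W1 @ x + b1 > 0).astype(int))

def group_by_signature(centers, W1, b1):
    """Group ball centers whose first-layer activation patterns coincide."""
    groups = {}
    for x in centers:
        groups.setdefault(activation_signature(x, W1, b1), []).append(x)
    return list(groups.values())
```

Any cheap proxy for "similar network computations" could replace the signature here; the point is only that grouping precedes joint verification.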