🤖 AI Summary
Existing neural network local robustness verifiers suffer from high computational overhead and face a trade-off between efficiency and precision, which hinders scalable verification over large sets of inputs. This paper proposes group local robustness verification, the first approach to bring a mini-batch mechanism into robustness verification. It dynamically groups inputs whose behaviors within their ε-balls are sufficiently similar and verifies them jointly, enabling shared analysis and counterexample-guided refinement. The method supports both fully connected and convolutional networks and rests on three core techniques: dynamic batch construction, adaptive batch sizing, and joint verification. Experiments on MNIST and CIFAR-10 show that the approach achieves an average 2.3x speedup over sample-wise verification (up to 4.1x), reducing total verification time from 24 hours to 6 hours, a substantial efficiency gain without compromising verification accuracy.
📝 Abstract
Neural network image classifiers are ubiquitous in many safety-critical applications. However, they are susceptible to adversarial attacks. To understand their robustness to attacks, many local robustness verifiers have been proposed to analyze $ε$-balls of inputs. Yet, existing verifiers incur long analysis times or lose too much precision, making them less effective for a large set of inputs. In this work, we propose a new approach to local robustness: group local robustness verification. The key idea is to leverage the similarity of the network computations over certain $ε$-balls to reduce the overall analysis time. We propose BaVerLy, a sound and complete verifier that boosts the local robustness verification of a set of $ε$-balls by dynamically constructing and verifying mini-batches. BaVerLy adaptively identifies successful mini-batch sizes, accordingly constructs mini-batches of $ε$-balls that have similar network computations, and verifies them jointly. If a mini-batch is verified, all its $ε$-balls are proven robust. Otherwise, one $ε$-ball is suspected of not being robust, guiding the refinement. In the latter case, BaVerLy leverages the analysis results to expedite the analysis of that $ε$-ball as well as of the other $ε$-balls in the batch. We evaluate BaVerLy on fully connected and convolutional networks for MNIST and CIFAR-10. Results show that BaVerLy speeds up the common one-by-one verification by 2.3x on average and up to 4.1x, in which case it reduces the total analysis time from 24 hours to 6 hours.
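The mini-batch loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `verify_jointly` and `verify_single` are hypothetical stand-ins for a joint abstract analysis of a batch of $ε$-balls and a sample-wise verifier, and the doubling/halving heuristic is one plausible way to adapt the batch size on success and failure.

```python
def batched_verify(balls, verify_jointly, verify_single,
                   init_batch=2, max_batch=16):
    """Verify a list of epsilon-balls in adaptively sized mini-batches.

    verify_jointly(batch) -> (True, None) if every ball in the batch is
        proven robust, else (False, index_of_suspect_ball).
    verify_single(ball)   -> True / False for one ball.
    Returns a dict mapping ball index -> robustness verdict.

    All names and the size-adaptation heuristic are illustrative, not
    taken from BaVerLy's actual implementation.
    """
    verdicts = {}
    batch_size = init_batch
    i = 0
    while i < len(balls):
        batch = list(range(i, min(i + batch_size, len(balls))))
        ok, suspect = verify_jointly([balls[j] for j in batch])
        if ok:
            # Joint proof covers every ball in the batch at once.
            for j in batch:
                verdicts[j] = True
            batch_size = min(batch_size * 2, max_batch)  # grow on success
        else:
            # Refinement: analyze the suspected ball on its own, then
            # retry the remaining balls jointly before falling back to
            # one-by-one verification.
            j = batch[suspect]
            verdicts[j] = verify_single(balls[j])
            rest = [k for k in batch if k != j]
            if rest:
                ok_rest, _ = verify_jointly([balls[k] for k in rest])
                if ok_rest:
                    for k in rest:
                        verdicts[k] = True
                else:
                    for k in rest:
                        verdicts[k] = verify_single(balls[k])
            batch_size = max(batch_size // 2, 1)  # shrink on failure
        i += len(batch)
    return verdicts
```

A toy usage, treating each "ball" as an integer that is robust iff even, shows the flow: a batch containing one non-robust ball triggers refinement of that ball while the rest are still certified jointly.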