Efficient Neural Network Verification via Order Leading Exploration of Branch-and-Bound Trees

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Branch-and-bound (BaB) methods for neural network robustness verification suffer from inefficient exploration due to their default "first-come-first-served" sub-problem scheduling, leading to slow counterexample discovery. Method: This paper proposes Oliva, an ordered sub-problem scheduling framework that prioritizes sub-problems more likely to contain counterexamples, thereby accelerating verification. Its core innovations are two complementary priority strategies: gradient-based greedy ordering ($Oliva^{GR}$) and simulated-annealing-inspired randomized exploration ($Oliva^{SA}$), which jointly balance exploration and exploitation while preserving completeness. Oliva is solver-agnostic and integrates with mainstream verifiers without modifying the underlying solvers. Results: Experiments on MNIST and CIFAR-10 show that Oliva achieves speedups of up to 25× and 80× over state-of-the-art methods, respectively.
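The ordered-exploration idea described above can be sketched as a best-first BaB loop: instead of a FIFO queue, sub-problems are popped from a priority queue keyed by an estimated likelihood of containing a counterexample. A minimal Python sketch follows, where `score`, `bound`, and `split` are hypothetical stand-ins for a verifier's components, not the paper's actual API:

```python
import heapq
import itertools

def bab_ordered(root, score, bound, split):
    """Best-first branch-and-bound sketch (ordered sub-problem scheduling).

    Hypothetical callbacks standing in for a verifier's components:
    - score(p): higher means p is judged more likely to hold a counterexample
    - bound(p): cheap analysis returning 'verified', 'falsified', or 'unknown'
    - split(p): divides an 'unknown' sub-problem into smaller ones
    """
    tie = itertools.count()                   # tie-breaker for equal scores
    heap = [(-score(root), next(tie), root)]  # max-priority via negated score
    while heap:
        _, _, p = heapq.heappop(heap)         # best-first instead of FIFO
        status = bound(p)
        if status == 'falsified':
            return 'falsified'                # early exit on a counterexample
        if status == 'unknown':               # inconclusive: refine and re-queue
            for child in split(p):
                heapq.heappush(heap, (-score(child), next(tie), child))
    return 'verified'                         # every sub-problem discharged
```

Note the completeness property the summary mentions: if no counterexample exists, the heap is still drained entirely, so only the visitation order changes relative to FIFO scheduling.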

📝 Abstract
The vulnerability of neural networks to adversarial perturbations has necessitated formal verification techniques that can rigorously certify the quality of neural networks. As the state of the art, branch and bound (BaB) is a "divide-and-conquer" strategy that applies off-the-shelf verifiers to sub-problems on which they perform better. While BaB can identify the sub-problems that need to be split, it explores the space of these sub-problems in a naive "first-come-first-served" manner, and can therefore be slow to reach a verification conclusion. To bridge this gap, we introduce an order over the sub-problems produced by BaB, concerning their different likelihoods of containing counterexamples. Based on this order, we propose a novel verification framework Oliva that explores the sub-problem space by prioritizing those sub-problems that are more likely to contain counterexamples, in order to reach the verification conclusion efficiently. Even if no counterexample can be found in any sub-problem, this only changes the order in which the sub-problems are visited and so does not degrade performance. Specifically, Oliva has two variants: $Oliva^{GR}$, a greedy strategy that always prioritizes the sub-problems that are more likely to contain counterexamples, and $Oliva^{SA}$, a balanced strategy inspired by simulated annealing that gradually shifts from exploration to exploitation to locate the globally optimal sub-problems. We experimentally evaluate the performance of Oliva on 690 verification problems spanning 5 models on the MNIST and CIFAR10 datasets. Compared to state-of-the-art approaches, Oliva achieves speedups of up to 25X on MNIST and up to 80X on CIFAR10.
Problem

Research questions and friction points this paper is trying to address.

Improves neural network verification via ordered BaB exploration
Prioritizes sub-problems likely to contain counterexamples for efficiency
Achieves significant speedup in verification compared to state-of-the-art methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prioritizes sub-problems likely to contain counterexamples
Introduces order in branch-and-bound exploration
Offers greedy and balanced strategy variants
Guanqin Zhang, Ph.D. Candidate at University of New South Wales
Kota Fukuda, Kyushu University, Fukuoka, Japan
Zhenya Zhang, Kyushu University (Formal methods, Hybrid systems, Temporal logic, Neural network verification)
H. M. N. Dilum Bandara, CSIRO’s Data61, Sydney, Australia; University of New South Wales, Sydney, Australia
Shiping Chen, CSIRO’s Data61, Sydney, Australia; University of New South Wales, Sydney, Australia
Jianjun Zhao, Kyushu University (Software Engineering, Programming Languages)
Yulei Sui, University of New South Wales (UNSW Sydney) (Static Program Analysis, Secure Software Engineering, AI4SE, SE4AI)