E-Globe: Scalable $\epsilon$-Global Verification of Neural Networks via Tight Upper Bounds and Pattern-Aware Branching

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of deploying neural networks in safety-critical applications, where robustness verification is hindered by a trade-off between scalability and completeness. The authors propose a hybrid verifier built on a branch-and-bound framework that integrates an upper-bounding method based on a nonlinear program with complementarity constraints (NLP-CC), which preserves the input–output graph structure of ReLU activations. By combining pattern-aware strong branching, warm-started optimization, and GPU-accelerated batch processing, the method tightens upper and lower bounds simultaneously until ε-global optimality is reached or early termination is triggered. Experiments demonstrate that the approach yields tighter upper bounds than PGD on MNIST and CIFAR-10, solves individual nodes significantly faster, and substantially outperforms existing mixed-integer programming (MIP) methods in end-to-end verification tasks.
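The exactness claim rests on encoding each ReLU with complementarity constraints rather than a convex relaxation. Below is a minimal sketch of that idea: the constraint form is the standard complementarity encoding of ReLU, and the function name is illustrative, not code from the paper.

```python
import numpy as np

def relu_cc_feasible(x, y, tol=1e-9):
    """Check the complementarity-constraint encoding of ReLU:
        y >= 0,  y >= x,  y * (y - x) = 0.
    A pair (x, y) satisfies all three iff y == max(x, 0) exactly,
    which is why any feasible NLP-CC solution maps back to a real
    network execution (and hence a valid counterexample)."""
    bounds_ok = np.all(y >= -tol) and np.all(y >= x - tol)
    complementary = np.all(np.abs(y * (y - x)) <= tol)
    return bool(bounds_ok and complementary)

x = np.array([-1.5, 0.0, 2.0])
assert relu_cc_feasible(x, np.maximum(x, 0.0))   # exact ReLU outputs: feasible
assert not relu_cc_feasible(np.array([2.0]),     # a relaxed point y in (0, x)
                            np.array([1.0]))     # violates the constraints
```

Because feasibility forces `y` to equal the true ReLU output, a feasible point of the upper-bounding problem is a concrete network input, so unsafe subproblems can be pruned immediately.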

📝 Abstract
Neural networks achieve strong empirical performance, but robustness concerns still hinder deployment in safety-critical applications. Formal verification provides robustness guarantees, but current methods face a scalability-completeness trade-off. We propose a hybrid verifier in a branch-and-bound (BaB) framework that efficiently tightens both upper and lower bounds until an $\epsilon$-global optimum is reached or early stop is triggered. The key is an exact nonlinear program with complementarity constraints (NLP-CC) for upper bounding that preserves the ReLU input-output graph, so any feasible solution yields a valid counterexample and enables rapid pruning of unsafe subproblems. We further accelerate verification with (i) warm-started NLP solves requiring minimal constraint-matrix updates and (ii) pattern-aligned strong branching that prioritizes splits most effective at tightening relaxations. We also provide conditions under which NLP-CC upper bounds are tight. Experiments on MNIST and CIFAR-10 show markedly tighter upper bounds than PGD across perturbation radii spanning up to three orders of magnitude, fast per-node solves in practice, and substantial end-to-end speedups over MIP-based verification, amplified by warm-starting, GPU batching, and pattern-aligned branching.
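The BaB loop the abstract describes alternates relaxation-based lower bounds with NLP-CC feasible-point upper bounds until the gap falls below $\epsilon$. The skeleton below is a generic sketch under stated assumptions: `lower_bound`, `upper_bound`, and `branch` are placeholders standing in for the relaxation, the warm-started NLP-CC solve, and the pattern-aligned split, and none of them are APIs from the paper.

```python
import heapq

def bab_verify(root, lower_bound, upper_bound, branch, eps=1e-3):
    """Epsilon-global branch-and-bound skeleton (illustrative sketch).

    Explores subproblems best-first by lower bound; any feasible point
    found by upper_bound is a valid incumbent, and children whose lower
    bound already exceeds best_ub - eps are pruned."""
    best_ub = float("inf")
    heap = [(lower_bound(root), root)]        # best-first on the lower bound
    while heap:
        lb, node = heapq.heappop(heap)
        if lb >= best_ub - eps:               # gap <= eps: eps-global optimum
            break
        best_ub = min(best_ub, upper_bound(node))   # feasible point -> valid UB
        for child in branch(node):
            clb = lower_bound(child)
            if clb < best_ub - eps:           # otherwise prune the subproblem
                heapq.heappush(heap, (clb, child))
    return best_ub
```

As a toy check of the control flow, minimizing $f(x) = x^2$ over interval nodes on $[-1, 2]$ (interval-endpoint lower bounds, midpoint-evaluation upper bounds, bisection branching) converges to within `eps` of the true optimum $0$.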
Problem

Research questions and friction points this paper is trying to address.

neural network verification
scalability-completeness trade-off
robustness guarantees
formal verification
safety-critical applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

NLP-CC
pattern-aware branching
ε-global verification
branch-and-bound
neural network verification