BaB-prob: Branch and Bound with Preactivation Splitting for Probabilistic Verification of Neural Networks

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of probabilistic verification for neural networks. We propose BaB-prob, the first branch-and-bound framework for probabilistic verification based on preactivation splitting. Methodologically, we extend preactivation splitting—previously used only in deterministic verification—to the probabilistic setting, enabling efficient computation of probabilistic upper and lower bounds for subproblems via linear bound propagation. To guide the search, we introduce an "uncertainty level" metric and design two splitting strategies: BaB-prob-ordered (order-based splitting) and BaB+BaBSR-prob (uncertainty-aware splitting). Experiments demonstrate that BaB-prob consistently outperforms state-of-the-art methods across untrained networks, MNIST, CIFAR-10, and the VNN-COMP 2025 benchmarks. Notably, it achieves superior verification accuracy and scalability for medium- to high-dimensional inputs. By unifying probabilistic reasoning with scalable branch-and-bound optimization, BaB-prob establishes a new, principled, and extensible paradigm for probabilistic formal verification of neural networks.

📝 Abstract
Branch-and-bound with preactivation splitting has been shown to be highly effective for deterministic verification of neural networks. In this paper, we extend this framework to the probabilistic setting. We propose BaB-prob, which iteratively divides the original problem into subproblems by splitting preactivations and leverages linear bounds computed by linear bound propagation to bound the probability for each subproblem. We prove soundness and completeness of BaB-prob for feedforward ReLU neural networks. Furthermore, we introduce the notion of uncertainty level and design two efficient strategies for preactivation splitting, yielding BaB-prob-ordered and BaB+BaBSR-prob. We evaluate BaB-prob on untrained networks, on MNIST and CIFAR-10 models, and on VNN-COMP 2025 benchmarks. Across these settings, our approach consistently outperforms state-of-the-art approaches on medium- to high-dimensional input problems.
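The core idea of the abstract can be illustrated on a toy case. The sketch below is hypothetical and not the paper's implementation: for a one-hidden-layer ReLU network with a scalar input and x ~ Uniform[-1, 1], fixing the sign of every preactivation makes f linear on a subdomain, so branch-and-bound over sign splits computes P[f(x) > 0] exactly. (In higher dimensions, where exact leaf computation is intractable, the paper instead bounds each subproblem's probability via linear bound propagation.) All weights and names here are made up for illustration.

```python
# Illustrative sketch only (not the authors' code): branch-and-bound with
# preactivation splitting for P[f(x) > 0], x ~ Uniform[-1, 1], on a
# one-hidden-layer ReLU net with a scalar input.
W = [1.5, -2.0]   # hidden weights (scalar input, two hidden units)
b = [0.2, 0.5]    # hidden biases
v = [1.0, 1.0]    # output weights
c = -0.9          # output bias; f(x) = v . relu(W x + b) + c

def satisfied_length(lo, hi):
    """Length of {x in [lo, hi] : f(x) > 0}, found by splitting on any
    preactivation z_i = W[i]*x + b[i] whose sign flips inside [lo, hi]."""
    if hi - lo < 1e-12:
        return 0.0
    for Wi, bi in zip(W, b):
        if Wi != 0.0:
            t = -bi / Wi                      # preactivation zero crossing
            if lo < t < hi:                   # unstable: branch on its sign
                return satisfied_length(lo, t) + satisfied_length(t, hi)
    # All signs fixed: f(x) = a*x + d on [lo, hi]; decide activity at midpoint.
    mid = 0.5 * (lo + hi)
    a = sum(vi * Wi for vi, Wi, bi in zip(v, W, b) if Wi * mid + bi > 0)
    d = c + sum(vi * bi for vi, Wi, bi in zip(v, W, b) if Wi * mid + bi > 0)
    if a == 0.0:
        return (hi - lo) if d > 0 else 0.0
    t = -d / a                                # root of the linear piece
    if a > 0:                                 # f > 0 for x > t
        return max(0.0, hi - max(lo, t))
    return max(0.0, min(hi, t) - lo)          # f > 0 for x < t

prob = satisfied_length(-1.0, 1.0) / 2.0      # uniform density on [-1, 1]
print(round(prob, 4))                         # → 0.6667
```

Because the input is scalar, every leaf subproblem is exact and the recursion terminates after splitting each unstable preactivation once; the paper's general setting replaces the exact leaf step with sound probabilistic upper and lower bounds.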
Problem

Research questions and friction points this paper is trying to address.

Extends branch-and-bound framework to probabilistic neural network verification
Proposes iterative preactivation splitting to bound subproblem probabilities
Improves performance on high-dimensional input problems versus state-of-the-art
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends branch-and-bound with preactivation splitting to probabilistic verification
Uses linear bound propagation to compute probability bounds for subproblems
Introduces uncertainty level and efficient preactivation splitting strategies
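One plausible instantiation of an uncertainty-aware splitting heuristic is sketched below. The paper's exact "uncertainty level" metric is not reproduced here; this hypothetical proxy simply prefers the unstable neuron whose interval preactivation bounds straddle zero most evenly, i.e., whose sign is least determined.

```python
# Hypothetical proxy for an "uncertainty level" splitting score
# (not the paper's definition): given interval bounds (l, u) on a
# preactivation, score how undetermined its sign is.
def uncertainty_level(l, u):
    """0 for a stable neuron (sign fixed); approaches 0.5 when l and u
    straddle zero symmetrically."""
    if l >= 0 or u <= 0:
        return 0.0
    return min(-l, u) / (u - l)

def pick_split(bounds):
    """Index of the preactivation to split next (highest uncertainty)."""
    return max(range(len(bounds)), key=lambda i: uncertainty_level(*bounds[i]))

bounds = [(-0.1, 2.0), (-1.0, 1.2), (0.3, 0.9)]
print(pick_split(bounds))   # → 1 (its bounds straddle zero most evenly)
```

A BaBSR-style strategy would additionally weight such a score by each neuron's estimated influence on the output bound; the plain sign-ambiguity score above is the simplest version of the idea.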
Fangji Wang
School of Aerospace Engineering, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30318, USA
Panagiotis Tsiotras
Georgia Institute of Technology
controls · robotics · artificial intelligence · flying robots · spacecraft