Counterexample Guided Branching via Directional Relaxation Analysis in Complete Neural Network Verification

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural networks are vulnerable to adversarial perturbations in safety-critical applications, necessitating efficient formal verification methods. Existing dataflow-based verification approaches fail to effectively leverage counterexamples generated during branch refinement. This work proposes the DRG-BaB framework, which, for the first time, treats counterexamples as precise indicators of local abstraction errors, reformulating branch-and-bound as a counterexample-guided abstraction refinement (CEGAR) loop. It introduces a Directional Relaxation Gap heuristic to prioritize the refinement of neurons responsible for spurious counterexamples. By shifting from blind search to goal-directed refinement, the method substantially reduces the search tree size and accelerates verification on high-dimensional benchmarks, outperforming state-of-the-art baselines.

📝 Abstract
Deep Neural Networks demonstrate exceptional performance but remain vulnerable to adversarial perturbations, necessitating formal verification for safety-critical deployment. To address the computational complexity of this task, researchers often employ abstraction-refinement techniques that iteratively tighten an over-approximated model. While structural methods utilize Counterexample-Guided Abstraction Refinement, state-of-the-art dataflow verifiers typically rely on Branch-and-Bound to refine numerical convex relaxations. However, current dataflow approaches operate with blind refinement processes that rely on static heuristics and fail to leverage specific diagnostic information from verification failures. In this work, we argue that Branch-and-Bound should be reformulated as a Dataflow CEGAR loop where the spurious counterexample serves as a precise witness to local abstraction errors. We propose DRG-BaB, a framework that introduces the Directional Relaxation Gap heuristic to prioritize branching on neurons actively contributing to falsification in the abstract domain. By deriving a closed-form spurious counterexample directly from linear bounds, our method transforms generic search into targeted refinement. Experiments on high-dimensional benchmarks demonstrate that this approach significantly reduces search tree size and verification time compared to established baselines.
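The abstract's "closed-form spurious counterexample" follows the standard construction in linear-bound verifiers: a linear lower bound a·x + c over an input box is minimized coordinate-wise at a box corner, and that minimizer is a concrete candidate counterexample. The sketch below illustrates this construction only; the function name and the toy inputs are illustrative assumptions, not the paper's actual implementation or its DRG heuristic.

```python
import numpy as np

def closed_form_counterexample(a, c, lo, hi):
    """Minimize the linear lower bound a.x + c over the box [lo, hi].

    The minimizer x_star is the input the abstraction claims is most
    violating. If the concrete network is in fact safe at x_star, the
    counterexample is spurious and pinpoints abstraction error.
    """
    # Each coordinate independently picks the box endpoint minimizing a_i * x_i:
    # lo_i when the coefficient is positive, hi_i otherwise.
    x_star = np.where(a > 0, lo, hi)
    bound = float(a @ x_star + c)  # minimum of the relaxation over the box
    return x_star, bound

# Toy example: x in the L-infinity ball of radius 0.25 around the origin.
a = np.array([0.5, -1.0, 2.0])
c = 0.1
x0, eps = np.zeros(3), 0.25
x_star, bound = closed_form_counterexample(a, c, x0 - eps, x0 + eps)
# bound < 0 means the relaxation cannot certify safety; x_star is the witness.
```

A negative `bound` at `x_star` that the concrete network does not reproduce is exactly the spurious-counterexample signal the abstract describes using to direct refinement.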
Problem

Research questions and friction points this paper is trying to address.

Neural Network Verification
Branch-and-Bound
Abstraction Refinement
Counterexample-Guided
Adversarial Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Counterexample-Guided Abstraction Refinement
Branch-and-Bound
Directional Relaxation Gap
Neural Network Verification
Convex Relaxation