🤖 AI Summary
Neural networks are vulnerable to adversarial perturbations in safety-critical applications, necessitating efficient formal verification methods. Existing dataflow-based verification approaches fail to effectively leverage counterexamples generated during branch refinement. This work proposes the DRG-BaB framework, which, for the first time, treats counterexamples as precise indicators of local abstraction errors, reformulating branch-and-bound as a counterexample-guided abstraction refinement (CEGAR) loop. It introduces a Directional Relaxation Gap heuristic to prioritize the refinement of neurons responsible for spurious counterexamples. By shifting from blind search to goal-directed refinement, the method substantially reduces the search tree size and accelerates verification on high-dimensional benchmarks, outperforming state-of-the-art baselines.
📝 Abstract
Deep Neural Networks demonstrate exceptional performance but remain vulnerable to adversarial perturbations, necessitating formal verification for safety-critical deployment. To address the computational complexity of this task, researchers often employ abstraction-refinement techniques that iteratively tighten an over-approximated model. While structural methods utilize Counterexample-Guided Abstraction Refinement, state-of-the-art dataflow verifiers typically rely on Branch-and-Bound to refine numerical convex relaxations. However, current dataflow approaches operate with blind refinement processes that rely on static heuristics and fail to leverage specific diagnostic information from verification failures. In this work, we argue that Branch-and-Bound should be reformulated as a Dataflow CEGAR loop where the spurious counterexample serves as a precise witness to local abstraction errors. We propose DRG-BaB, a framework that introduces the Directional Relaxation Gap heuristic to prioritize branching on neurons actively contributing to falsification in the abstract domain. By deriving a closed-form spurious counterexample directly from linear bounds, our method transforms generic search into targeted refinement. Experiments on high-dimensional benchmarks demonstrate that this approach significantly reduces search tree size and verification time compared to established baselines.
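The abstract's key step, deriving a closed-form spurious counterexample from linear bounds, can be illustrated with a minimal sketch. In CROWN-style dataflow verifiers, the property margin is lower-bounded by a linear function g(x) = aᵀx + b over an input box [l, u]; the minimizer of that bound is available in closed form and serves as the candidate counterexample. The function name and the specific setup below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Assumed setting: the verifier proves g(x) = a @ x + b <= output margin
# for all x in the box [l, u]. If min g(x) < 0, the box minimizer is a
# closed-form candidate counterexample: take l_i where a_i > 0, u_i otherwise.

def closed_form_counterexample(a, b, l, u):
    """Minimizer of a @ x + b over the box [l, u], plus the attained bound."""
    x_star = np.where(a > 0, l, u)  # coordinate-wise minimization of a linear form
    return x_star, float(a @ x_star + b)

a = np.array([0.5, -1.0, 2.0])
b = 0.1
l = np.full(3, -1.0)
u = np.full(3, 1.0)

x_star, lb = closed_form_counterexample(a, b, l, u)
print(x_star, lb)  # -> [-1.  1. -1.] -3.4

# If the concrete network actually satisfies the property at x_star, the
# counterexample is spurious: the relaxation, not the network, is at fault,
# which is the diagnostic signal DRG-BaB uses to choose where to branch.
```

Because the candidate falls out of the bound itself, checking it against the concrete network costs one forward pass, and a spurious result localizes exactly which relaxed neurons produced the gap.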