🤖 AI Summary
This work addresses the inefficiency of neural network verification methods that redundantly explore identical infeasible regions when solving related queries. To overcome this limitation, the authors propose an incremental verification approach that safely reuses conflicts (i.e., infeasible combinations of activation phases) across queries within a branch-and-bound framework. By formally characterizing refinement relationships between queries, they prove that conflicts remain valid under refinement, and they integrate SAT-based consistency checking and propagation to prune infeasible subproblems early. This method is the first to enable cross-query inheritance and reuse of conflict information in neural network verification. Implemented in Marabou, it achieves speedups of up to 1.9× on tasks including local robustness radius determination, verification with input splitting, and minimal sufficient feature set extraction.
📝 Abstract
Neural network verification is often used as a core component within larger analysis procedures, which generate sequences of closely related verification queries over the same network. In existing neural network verifiers, each query is typically solved independently, and information learned during previous runs is discarded, leading to repeated exploration of the same infeasible regions of the search space. In this work, we aim to expedite verification by reducing this redundancy. We propose an incremental verification technique that reuses learned conflicts across related verification queries. The technique can be added on top of any branch-and-bound-based neural network verifier. During verification, the verifier records conflicts corresponding to learned infeasible combinations of activation phases, and retains them across runs. We formalize a refinement relation between verification queries and show that conflicts learned for a query remain valid under refinement, enabling sound conflict inheritance. Inherited conflicts are handled using a SAT solver to perform consistency checks and propagation, allowing infeasible subproblems to be detected and pruned early during search. We implement the proposed technique in the Marabou verifier and evaluate it on three verification tasks: local robustness radius determination, verification with input splitting, and minimal sufficient feature set extraction. Our experiments show that incremental conflict reuse reduces verification effort and yields speedups of up to $1.9\times$ over a non-incremental baseline.
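To make the mechanism concrete, here is a minimal sketch (not Marabou's actual API) of how inherited conflicts can prune branch-and-bound subproblems. It assumes binary ReLU phases ("active"/"inactive"); a conflict is a set of (neuron, phase) pairs known to be jointly infeasible, which corresponds to the SAT clause asserting that at least one of those phases must be flipped. The names `is_pruned` and `propagate` are illustrative, not from the paper.

```python
# Hedged sketch of conflict-based pruning during branch and bound.
# A "conflict" is a set of (neuron, phase) pairs learned to be jointly
# infeasible in an earlier, coarser query; under the paper's refinement
# relation it stays valid for refined queries and may be reused.
from typing import Dict, FrozenSet, Tuple, Iterable

Literal = Tuple[str, str]       # (neuron id, phase), e.g. ("n3", "active")
Conflict = FrozenSet[Literal]   # jointly infeasible phase combination

def is_pruned(assignment: Dict[str, str],
              conflicts: Iterable[Conflict]) -> bool:
    """True if the current partial phase assignment already contains an
    inherited conflict, so this subproblem is infeasible and can be cut."""
    return any(all(assignment.get(n) == p for n, p in c) for c in conflicts)

def propagate(assignment: Dict[str, str],
              conflicts: Iterable[Conflict]) -> Dict[str, str]:
    """Unit-propagation analogue: if all but one literal of a conflict is
    already satisfied, the remaining neuron is forced into the opposite
    phase (assuming the two-phase ReLU case)."""
    changed = True
    while changed:
        changed = False
        for c in conflicts:
            unassigned = [(n, p) for n, p in c if n not in assignment]
            satisfied = [(n, p) for n, p in c if assignment.get(n) == p]
            if len(unassigned) == 1 and len(satisfied) == len(c) - 1:
                n, p = unassigned[0]
                assignment[n] = "inactive" if p == "active" else "active"
                changed = True
    return assignment
```

For example, given the conflict {("n1", "active"), ("n2", "active")} and a branch that fixes n1 to "active", `propagate` forces n2 to "inactive", and any branch fixing both neurons active is rejected by `is_pruned` without invoking the expensive verifier. A production implementation would delegate these checks to an incremental SAT solver rather than the naive loop above.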