Towards Optimal Branching of Linear and Semidefinite Relaxations for Neural Network Robustness Certification

📅 2021-01-22
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the excessive relaxation error of the linear programming (LP) and semidefinite programming (SDP) formulations used in robustness certification of ReLU neural networks. The authors propose a branch-and-bound framework based on geometric partitioning of the input uncertainty set. Key contributions: (i) a theoretical proof that an intelligently designed partition completely eliminates LP relaxation error for single-hidden-layer networks; (ii) since computing the optimal partition is NP-hard, closed-form LP and SDP branching schemes that instead minimize the worst-case relaxation error, extended into an efficient multi-layer heuristic; and (iii) substantial improvements in certified accuracy on MNIST, CIFAR-10, and a Wisconsin breast cancer classifier. The paper characterizes the regimes in which branched LP versus branched SDP is best applied, and the multi-layer heuristic attains performance comparable to prior state-of-the-art heuristics on large-scale, deep-network certification benchmarks.
📝 Abstract
In this paper, we study certifying the robustness of ReLU neural networks against adversarial input perturbations. To diminish the relaxation error suffered by the popular linear programming (LP) and semidefinite programming (SDP) certification methods, we take a branch-and-bound approach to propose partitioning the input uncertainty set and solving the relaxations on each part separately. We show that this approach reduces relaxation error, and that the error is eliminated entirely upon performing an LP relaxation with a partition intelligently designed to exploit the nature of the ReLU activations. To scale this approach to large networks, we consider using a coarser partition whereby the number of parts in the partition is reduced. We prove that computing such a coarse partition that directly minimizes the LP relaxation error is NP-hard. By instead minimizing the worst-case LP relaxation error, we develop a closed-form branching scheme in the single-hidden layer case. We extend the analysis to the SDP, where the feasible set geometry is exploited to design a branching scheme that minimizes the worst-case SDP relaxation error. Experiments on MNIST, CIFAR-10, and Wisconsin breast cancer diagnosis classifiers demonstrate significant increases in the percentages of test samples certified. By independently increasing the input size and the number of layers, we empirically illustrate under which regimes the branched LP and branched SDP are best applied. Finally, we extend our LP branching method into a multi-layer branching heuristic, which attains comparable performance to prior state-of-the-art heuristics on large-scale, deep neural network certification benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Certify ReLU neural network robustness against adversarial perturbations
Reduce relaxation error in LP and SDP certification via branch-and-bound
Develop scalable branching schemes for large networks and multi-layer cases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Branch-and-bound partitions input uncertainty set
Closed-form branching minimizes worst-case LP error
Geometry-based branching reduces SDP relaxation error
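The partition-and-bound idea behind these contributions can be sketched in a few lines. The toy below is not the paper's LP or SDP relaxation: it substitutes a simple interval-arithmetic bound for a one-hidden-layer ReLU network and splits the input box along its widest axis, bounding each part separately and taking the maximum. All function names (`relaxed_upper_bound`, `branch_and_bound`) and the interval relaxation are illustrative assumptions; the paper's actual branching rules are closed-form and geometry-aware rather than widest-axis splits.

```python
import numpy as np

def relaxed_upper_bound(W, b, c, lo, hi):
    # Interval-arithmetic stand-in for an LP/SDP relaxation: upper-bounds
    # c^T ReLU(W x + b) over the input box [lo, hi].
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    pre_hi = W @ center + np.abs(W) @ radius + b  # pre-activation upper bounds
    pre_lo = W @ center - np.abs(W) @ radius + b  # pre-activation lower bounds
    post_hi = np.maximum(pre_hi, 0.0)             # ReLU is monotone
    post_lo = np.maximum(pre_lo, 0.0)
    # maximize c^T z coordinate-wise over the box [post_lo, post_hi]
    return float(np.where(c >= 0, c * post_hi, c * post_lo).sum())

def branch_and_bound(W, b, c, lo, hi, depth=4):
    # Split the input box along its widest axis, bound each part separately,
    # and return the max over parts -- a tighter (never looser) global bound.
    if depth == 0:
        return relaxed_upper_bound(W, b, c, lo, hi)
    axis = int(np.argmax(hi - lo))
    mid = (lo[axis] + hi[axis]) / 2.0
    hi_left = hi.copy(); hi_left[axis] = mid
    lo_right = lo.copy(); lo_right[axis] = mid
    return max(branch_and_bound(W, b, c, lo, hi_left, depth - 1),
               branch_and_bound(W, b, c, lo_right, hi, depth - 1))
```

Increasing `depth` refines the partition, so the certified bound monotonically tightens toward the true worst case, mirroring how the paper trades partition coarseness against relaxation error.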