🤖 AI Summary
In neural network verification, semidefinite programming (SDP) relaxations for ReLU networks lose strict feasibility as network depth increases, a phenomenon the paper terms "interior-point vanishing" that severely degrades numerical stability and optimality and imposes a fundamental scalability bottleneck. The paper formally characterizes this phenomenon for the first time and shows that the conventional per-unit ReLU bound constraints, inherited from prior work, are not merely unhelpful but actively worsen feasibility. To address this, the authors propose and analyze five feasibility-enhancing strategies, combining convex relaxation analysis, theoretical feasibility proofs, and empirical validation. Experiments show that the proposed methods solve 88% of the instances on which standard SDP-based verification fails, accounting for 41% of all tested problems, substantially improving both the robustness and the scalability of verification for deep ReLU networks.
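For context, strict feasibility is Slater's condition for SDPs. A minimal statement (notation ours, not the paper's): an SDP in standard form is strictly feasible when

$$
\exists\, X \in \mathbb{S}^n \;:\quad \mathcal{A}(X) = b, \qquad X \succ 0,
$$

i.e., some feasible $X$ lies in the interior of the positive semidefinite cone, where $\mathcal{A}$ collects the problem's linear constraints. If every feasible $X$ is singular, the problem has no interior point: strong duality is no longer guaranteed and interior-point solvers lose their usual stability and convergence guarantees, which is the failure mode described above.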
📝 Abstract
Semidefinite programming (SDP) relaxation has emerged as a promising approach for neural network verification, offering tighter bounds than other convex relaxation methods for deep neural networks (DNNs) with ReLU activations. However, we identify a critical limitation of the SDP relaxation when applied to deep networks: interior-point vanishing, the loss of strict feasibility -- a crucial condition for the numerical stability and optimality of SDP. Through rigorous theoretical and empirical analysis, we demonstrate that as the depth of a DNN increases, strict feasibility is likely to be lost, creating a fundamental barrier to scaling SDP-based verification. To address interior-point vanishing, we design and investigate five solutions that enhance the feasibility conditions of the verification problem. Our methods successfully solve 88% of the problems that existing methods could not, accounting for 41% of the total. Our analysis also reveals that the valid constraints on the lower and upper bounds of each ReLU unit, traditionally inherited from prior work without solid justification, are not only unbeneficial but actually harmful to the problem's feasibility. This work provides valuable insights into the fundamental challenges of SDP-based DNN verification and offers practical solutions that improve its applicability to deeper neural networks, contributing to the development of more reliable and secure DNN-based systems.
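To make the setting concrete, below is a minimal sketch of the kind of lifted-matrix SDP relaxation the abstract refers to, written for a single ReLU layer in the style of the standard formulation from the literature; the symbols $W$, $x$, $z$, $l$, $u$ are illustrative and not taken from the paper. The exact activation $z = \max(Wx, 0)$ is equivalent to

$$
z \ge 0, \qquad z \ge Wx, \qquad z \odot (z - Wx) = 0,
$$

and the relaxation lifts $v = (1, x^\top, z^\top)^\top$ into a moment matrix $M$ that mimics $vv^\top$:

$$
M \;=\; \begin{pmatrix} 1 & x^\top & z^\top \\ x & X & P \\ z & P^\top & Z \end{pmatrix} \succeq 0,
\qquad \operatorname{diag}(Z) = \operatorname{diag}\!\left(P^\top W^\top\right),
$$

where the trailing equality linearizes the componentwise complementarity $z \odot (z - Wx) = 0$. Precomputed per-unit bounds $l \le x \le u$ are typically added as interval constraints $\operatorname{diag}(X) \le (l+u) \odot x - l \odot u$, obtained by relaxing $(x - l) \odot (x - u) \le 0$. Per-unit bound constraints of this kind, imposed at every ReLU unit and stacked across layers, are the ones the abstract reports as harmful to strict feasibility.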