AI Summary
This work addresses the formal robustness verification problem for Binary Neural Networks (BNNs) under $\ell_\infty$- and $\ell_2$-norm adversarial perturbations. Unlike conventional NP-hard approaches based on Satisfiability Modulo Theories (SMT) or Mixed-Integer Linear Programming (MILP), we propose a novel, scalable, and numerically stable verification method. Our approach is the first to integrate sparse polynomial optimization with first-order semidefinite programming (SDP) relaxation, constructing tight continuous relaxations over the input space. This formulation substantially mitigates numerical instability and overcomes the scalability limitations of existing verifiers. Experimental evaluation on standard BNN architectures demonstrates that our method enables large-scale formal adversarial robustness verification, achieving greater efficiency and scalability than state-of-the-art techniques. The framework establishes a new paradigm for rigorous safety assessment of BNNs.
Abstract
This paper explores methods for verifying properties of Binary Neural Networks (BNNs), focusing on robustness against adversarial attacks. Despite their lower computational and memory requirements, BNNs, like their full-precision counterparts, are sensitive to input perturbations. Established methods for this problem are predominantly based on Satisfiability Modulo Theories and Mixed-Integer Linear Programming techniques, which suffer from NP-hard complexity and often face scalability issues. We introduce an alternative approach using Semidefinite Programming (SDP) relaxations derived from sparse Polynomial Optimization. Our approach, compatible with a continuous input space, not only mitigates the numerical issues associated with floating-point calculations but also enhances verification scalability through the strategic use of tighter first-order semidefinite relaxations. We demonstrate the effectiveness of our method in verifying robustness against both $\|\cdot\|_\infty$- and $\|\cdot\|_2$-based adversarial attacks.
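To make the verification question concrete: certifying robustness means proving that every input in a norm ball around a point $x$ yields the same classification. The toy sketch below illustrates this for a two-layer BNN with sign activations using simple interval propagation over an $\ell_\infty$ ball. It is a sound-but-incomplete check, not the paper's SDP relaxation; the network weights and the `certify_linf` helper are illustrative assumptions.

```python
import numpy as np

def sign(z):
    # BNN activation: maps pre-activations to {-1, +1}
    return np.where(z >= 0, 1.0, -1.0)

def certify_linf(W1, b1, W2, b2, x, eps):
    """Sound-but-incomplete robustness check for a toy 2-layer BNN
    under an l_inf ball of radius eps (interval propagation; NOT the
    paper's sparse-polynomial SDP method)."""
    pre = W1 @ x + b1
    # Worst-case pre-activation shift over ||x' - x||_inf <= eps
    slack = eps * np.abs(W1).sum(axis=1)
    lo, hi = pre - slack, pre + slack
    # If every first-layer sign is stable across the ball, the binary
    # hidden code -- and hence the output -- is constant on the ball.
    if np.all((lo >= 0) | (hi < 0)):
        logits = W2 @ sign(pre) + b2
        return True, int(np.argmax(logits))
    return False, None  # inconclusive: a tighter relaxation is needed

# Tiny hand-picked binary-weight network (illustrative only)
W1 = np.array([[1., -1.], [1., 1.]])
b1 = np.array([0.1, -0.1])
W2 = np.array([[1., 1.], [-1., -1.]])
b2 = np.zeros(2)
x = np.array([0.5, 0.2])
ok, label = certify_linf(W1, b1, W2, b2, x, eps=0.01)  # certified, class 0
```

Interval propagation can only answer "certified" or "unknown"; tighter continuous relaxations such as the SDP approach described above aim to shrink the "unknown" region while remaining tractable.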