Verifying Properties of Binary Neural Networks Using Sparse Polynomial Optimization

πŸ“… 2024-05-27
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the formal robustness verification problem for Binary Neural Networks (BNNs) under ℓ∞- and ℓ₂-norm adversarial perturbations. Unlike conventional NP-hard approaches based on Satisfiability Modulo Theories (SMT) or Mixed-Integer Linear Programming (MILP), the authors propose a scalable and numerically stable verification method. Their approach integrates sparse polynomial optimization with first-order semidefinite programming (SDP) relaxations, constructing tight continuous relaxations over the input space. This formulation mitigates numerical instability and eases the scalability limitations of existing verifiers. Experimental evaluation on standard BNN architectures demonstrates that the method enables large-scale formal adversarial robustness verification, with improved efficiency and scalability compared to state-of-the-art techniques.

πŸ“ Abstract
This paper explores methods for verifying the properties of Binary Neural Networks (BNNs), focusing on robustness against adversarial attacks. Despite their lower computational and memory needs, BNNs, like their full-precision counterparts, are also sensitive to input perturbations. Established methods for solving this problem are predominantly based on Satisfiability Modulo Theories and Mixed-Integer Linear Programming techniques, which are characterized by NP complexity and often face scalability issues. We introduce an alternative approach using Semidefinite Programming relaxations derived from sparse Polynomial Optimization. Our approach, compatible with continuous input space, not only mitigates numerical issues associated with floating-point calculations but also enhances verification scalability through the strategic use of tighter first-order semidefinite relaxations. We demonstrate the effectiveness of our method in verifying robustness against both $\ell_\infty$- and $\ell_2$-based adversarial attacks.
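The key idea the abstract describes, replacing integer activation variables with polynomial constraints so the problem admits SDP relaxations, can be illustrated on a single binary neuron. The sketch below is only a minimal illustration of this general technique, not the paper's actual formulation: the variable names (`w`, `b`, `x`, `y`) and the Shor-type first-order moment matrix are standard textbook constructions assumed here, using numpy only.

```python
import numpy as np

# A binary neuron: y = sign(w @ x + b), with activation y in {-1, +1}.
# Illustrative weights and input, not taken from the paper.
w = np.array([0.5, -1.0, 0.25])
b = 0.1
x = np.array([1.0, -1.0, 1.0])
pre = w @ x + b
y = 1.0 if pre >= 0 else -1.0

# Polynomial encoding of the sign activation (no integer variables):
#   y**2 - 1 == 0         ... y is binary (+1 or -1)
#   y * (w @ x + b) >= 0  ... y agrees with the sign of the pre-activation
assert abs(y**2 - 1) < 1e-9
assert y * pre >= 0

# First-order (Shor-type) moment matrix for the lifted vector v = (x, y):
#   M = [[1, v^T], [v, v v^T]]
# Any true assignment yields a PSD, rank-1 moment matrix; SDP relaxations
# optimize over PSD matrices of this shape with the rank constraint dropped.
v = np.concatenate([x, [y]])
M = np.block([[np.array([[1.0]]), v[None, :]],
              [v[:, None], np.outer(v, v)]])
print(np.linalg.eigvalsh(M).min() >= -1e-9)  # PSD check passes
```

In a full verifier, one such moment block is built per variable clique rather than over all variables at once; exploiting that sparsity is what keeps the SDP tractable at network scale.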
Problem

Research questions and friction points this paper is trying to address.

Verifying robustness of Binary Neural Networks against adversarial attacks
Addressing scalability issues in existing verification methods
Using sparse Polynomial Optimization for efficient property verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Semidefinite Programming relaxations
Leverages sparse Polynomial Optimization
Enhances scalability with tighter relaxations
Jianting Yang
CNRS@CREATE LTD, Singapore
Srećko Đurasinović
CNRS@CREATE LTD, Singapore, College of Computing and Data Science, Nanyang Technological University
Jean B. Lasserre
LAAS-CNRS & TSE, Toulouse France, ANITI
Semidefinite Programming, Polynomial Optimization, Global Optimization, Convex Optimization
Victor Magron
CNRS
Polynomial optimization, quantum information, dynamical systems, deep learning, optimal power flow
Jun Zhao
College of Computing and Data Science, Nanyang Technological University