Exact and Asymptotically Complete Robust Verifications of Neural Networks via Quantum Optimization

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of deep neural networks to adversarial perturbations in safety-critical settings, where existing robustness verification methods struggle to balance efficiency and completeness. The paper introduces quantum optimization into neural network robustness verification for the first time, proposing an exact and complete verification model tailored to piecewise-linear activation functions. For general activation functions, it develops a scalable over-approximation scheme that is asymptotically complete, integrating quantum Benders decomposition, interval arithmetic, and an inter-layer hybrid quantum-classical solving strategy. Theoretical analysis provides certification transfer bounds, and experiments on standard benchmarks demonstrate high certified accuracy, establishing quantum optimization as an effective foundational approach for guaranteeing robustness in complex neural networks.

📝 Abstract
Deep neural networks (DNNs) enable high performance across domains but remain vulnerable to adversarial perturbations, limiting their use in safety-critical settings. Here, we introduce two quantum-optimization-based models for robust verification that reduce the combinatorial burden of certification under bounded input perturbations. For piecewise-linear activations (e.g., ReLU and hardtanh), our first model yields an exact formulation that is sound and complete, enabling precise identification of adversarial examples. For general activations (including sigmoid and tanh), our second model constructs scalable over-approximations via piecewise-constant bounds and is asymptotically complete, with approximation error vanishing as the segmentation is refined. We further integrate Quantum Benders Decomposition with interval arithmetic to accelerate solving, and propose certificate-transfer bounds that relate robustness guarantees of pruned networks to those of the original model. Finally, a layerwise partitioning strategy supports a quantum-classical hybrid workflow by coupling subproblems across depth. Experiments on robustness benchmarks show high certification accuracy, indicating that quantum optimization can serve as a principled primitive for robustness guarantees in neural networks with complex activations.
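The abstract mentions interval arithmetic as an ingredient for accelerating verification. The paper's actual formulation is quantum, but the classical idea it builds on can be previewed with standard interval bound propagation through a tiny ReLU network. Everything below (the weights, the input, the perturbation radius `eps`) is a hypothetical example, not from the paper:

```python
# Illustrative sketch only: classical interval bound propagation (IBP)
# through a toy ReLU network, to show how interval arithmetic yields a
# sound (but incomplete) robustness check. Weights and eps are made up.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius  # worst-case spread over the box
    return c - r, c + r

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds elementwise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 2-2-2 network; x0 is the clean input, eps the L-inf perturbation bound.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0], [-1.0, 1.0]]); b2 = np.zeros(2)
x0, eps = np.array([1.0, 0.0]), 0.1

lo, hi = x0 - eps, x0 + eps
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)

# Certified robust for class 0 if its output lower bound beats
# the upper bound of the competing class 1 over the whole box.
robust = bool(lo[0] > hi[1])
print(lo, hi, robust)  # -> [ 1.05 -0.95] [ 1.95 -0.05] True
```

Because the bounds over-approximate the true reachable set, a `True` answer is a guarantee while a `False` answer is inconclusive; this incompleteness is exactly the gap the paper's exact and asymptotically complete models aim to close.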
Problem

Research questions and friction points this paper is trying to address.

adversarial perturbations
robust verification
neural networks
safety-critical settings
bounded input perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantum optimization
robust verification
asymptotically complete
neural network certification
quantum-classical hybrid
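One innovation above is an asymptotically complete over-approximation via piecewise-constant bounds on general activations. A minimal classical sketch of the underlying intuition, assuming a sigmoid activation and arbitrary example segment counts (none of this is the paper's quantum construction):

```python
# Hedged sketch: measure the loosest gap between per-segment constant
# upper and lower bounds of sigmoid on a uniform grid, and observe it
# shrink as the segmentation is refined -- the intuition behind the
# "asymptotically complete" claim. Segment counts are example values.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def max_gap(n_segments, lo=-6.0, hi=6.0):
    """Width of the loosest stair-step bound pair over [lo, hi]."""
    edges = np.linspace(lo, hi, n_segments + 1)
    # Sigmoid is monotone, so on each segment the tight constant
    # bounds are its values at the segment endpoints.
    lower = sigmoid(edges[:-1])  # per-segment constant lower bound
    upper = sigmoid(edges[1:])   # per-segment constant upper bound
    return float(np.max(upper - lower))

gaps = [max_gap(n) for n in (4, 16, 64, 256)]
print(gaps)  # strictly decreasing toward 0 as the grid is refined
```

As the maximum gap goes to zero, any spurious counterexample admitted by the over-approximation is eventually ruled out, which is the sense in which refinement recovers completeness.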