Fully Automatic Neural Network Reduction for Formal Verification

📅 2023-05-03
📈 Citations: 4
Influential: 1
🤖 AI Summary
Formal verification of neural networks in safety-critical applications faces severe scalability bottlenecks. Method: This paper proposes an automatic, provably sound reduction technique that is computed on the fly during reachability-based verification. It is the first sound reduction approach applicable to arbitrary element-wise activation functions (e.g., ReLU, sigmoid, tanh), and it is further specialized for CNNs by exploiting the similarity of neighboring pixels to improve compression. Contribution/Results: By tuning all reduction parameters automatically during verification, the approach shrinks the neuron count to a fraction of the original network while keeping the outer-approximation error small; verification time decreases to a similar degree. Consequently, large-scale networks become practically verifiable under stringent timing constraints, a significant step toward real-world safety-critical deployment.
📝 Abstract
Formal verification of neural networks is essential before their deployment in safety-critical applications. However, existing methods for formally verifying neural networks are not yet scalable enough to handle practical problems involving a large number of neurons. We address this challenge by introducing a fully automatic and sound reduction of neural networks using reachability analysis. The soundness ensures that the verification of the reduced network entails the verification of the original network. To the best of our knowledge, we present the first sound reduction approach that is applicable to neural networks with any type of element-wise activation function, such as ReLU, sigmoid, and tanh. The network reduction is computed on the fly while simultaneously verifying the original network and its specifications. All parameters are automatically tuned to minimize the network size without compromising verifiability. We further show the applicability of our approach to convolutional neural networks by explicitly exploiting similar neighboring pixels. Our evaluation shows that our approach can reduce the number of neurons to a fraction of the original count with only a minor outer-approximation error, and thus reduce the verification time to a similar degree.
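The core idea — merging neurons whose reachable output ranges are similar, while keeping a sound outer-approximation — can be illustrated with a toy sketch. This is not the paper's algorithm (which uses full reachability analysis with automatic parameter tuning); it is a minimal interval-arithmetic analogue in which the function names, the greedy grouping, and the tolerance parameter `tol` are all illustrative assumptions:

```python
import numpy as np

def interval_relu_bounds(W, b, x_lo, x_hi):
    """Propagate an input box [x_lo, x_hi] through a ReLU layer with
    interval arithmetic, returning per-neuron output bounds."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lo = W_pos @ x_lo + W_neg @ x_hi + b
    hi = W_pos @ x_hi + W_neg @ x_lo + b
    # ReLU is monotone, so applying it to the bounds stays sound.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def merge_similar_neurons(lo, hi, tol):
    """Greedily group neurons whose output intervals differ by at most
    tol, replacing each group by one interval covering all members --
    a sound outer-approximation of every merged neuron."""
    groups, used = [], np.zeros(len(lo), dtype=bool)
    for i in range(len(lo)):
        if used[i]:
            continue
        close = (~used) & (np.abs(lo - lo[i]) <= tol) & (np.abs(hi - hi[i]) <= tol)
        idx = np.where(close)[0]
        used[idx] = True
        groups.append((idx, lo[idx].min(), hi[idx].max()))
    return groups

# Three neurons, two of them nearly identical -> reduced to two groups.
W = np.array([[1.0, -0.5], [1.01, -0.5], [-2.0, 3.0]])
b = np.zeros(3)
lo, hi = interval_relu_bounds(W, b, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
groups = merge_similar_neurons(lo, hi, tol=0.05)
```

The soundness property mirrors the paper's guarantee in miniature: every original neuron's interval is contained in its group's interval, so any property proved on the merged bounds also holds for the original neurons.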
Problem

Research questions and friction points this paper is trying to address.

Scalable formal verification of neural networks
Automatic reduction of large neural networks
Faster verification with minimized network size
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully automatic neural network reduction using reachability analysis
Applicable to networks with any element-wise activation function
Reduces network size without compromising verifiability