Tightening Robustness Verification of MaxPool-based Neural Networks via Minimizing the Over-Approximation Zone

📅 2022-11-13
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing neural network robustness verifiers suffer from severe over-approximation and low certification accuracy for MaxPool layers. Method: We propose Ti-Lin, the first verifier to derive theoretically optimal linear upper and lower bounds for individual MaxPool neurons—yielding provably tight neuron-level linear approximations—and integrate them into a hybrid abstract interpretation framework combining compact linear relaxations with mixed-integer linear programming (Hybrid-Lin). Results: Evaluated on LeNet and PointNet across MNIST, CIFAR-10, Tiny ImageNet, and ModelNet40, Ti-Lin improves certified accuracy by up to 78.6% over state-of-the-art tools, while maintaining verification time comparable to the fastest existing verifier. This significantly enhances both the practicality and reliability of provable robustness certification for MaxPool-based networks.
📝 Abstract
The robustness of neural network classifiers is important in safety-critical domains and can be quantified by robustness verification. At present, efficient and scalable verification techniques are sound but incomplete, so the improvement of verified robustness results is the key criterion for evaluating incomplete verification approaches. The multi-variate MaxPool function is widely adopted yet challenging to verify. In this paper, we present Ti-Lin, a robustness verifier for MaxPool-based CNNs with Tight Linear Approximation. In the line of work on minimizing the over-approximation zone of the non-linear functions in CNNs, we are the first to propose provably neuron-wise tightest linear bounds for the MaxPool function. With our proposed linear bounds, we can certify larger robustness results for CNNs. We evaluate the effectiveness of Ti-Lin on different verification frameworks with open-sourced benchmarks, including LeNet, PointNet, and networks trained on the MNIST, CIFAR-10, Tiny ImageNet, and ModelNet40 datasets. Experimental results show that Ti-Lin significantly outperforms the state-of-the-art methods across all networks, with up to 78.6% improvement in certified accuracy at almost the same time cost as the fastest tool. Our code is available at https://github.com/xiaoyuanpigo/Ti-Lin-Hybrid-Lin.
Problem

Research questions and friction points this paper is trying to address.

Improving robustness verification for MaxPool-based neural networks
Minimizing over-approximation zones in CNN nonlinear functions
Deriving provably tightest neuron-wise linear bounds for the MaxPool function
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tight Linear Approximation for MaxPool
Neuron-wise tightest linear bounds
Improved certified robustness accuracy
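To make the core idea concrete, here is a minimal sketch of a sound neuron-wise linear relaxation for one MaxPool neuron over box (interval) input bounds. This follows the simpler DeepPoly-style rule, not Ti-Lin's provably tightest coefficients (which are derived in the paper); the function name and interface are illustrative only. The goal of such a relaxation is to sandwich the non-linear `max` between two affine functions, and Ti-Lin's contribution is choosing the planes that minimize the volume of the gap (the over-approximation zone) between them.

```python
def maxpool_relaxation(lowers, uppers):
    """Sketch of a sound linear relaxation of max(x_1, ..., x_n) over a box,
    DeepPoly-style (NOT Ti-Lin's tightest bounds). Given interval bounds
    lowers[i] <= x_i <= uppers[i], return (lc, lb, uc, ub) such that
        sum(lc[i] * x[i]) + lb  <=  max(x)  <=  sum(uc[i] * x[i]) + ub
    holds for every x in the box."""
    n = len(lowers)
    # Lower plane: max(x) >= x_k for any k, so pick the input with the
    # largest lower bound to make the plane as high (tight) as possible.
    k = max(range(n), key=lambda i: lowers[i])
    lc = [1.0 if i == k else 0.0 for i in range(n)]
    lb = 0.0
    # Upper plane: if x_k dominates every other input on the whole box,
    # max(x) == x_k exactly, and the same plane is an exact upper bound.
    if all(lowers[k] >= uppers[i] for i in range(n) if i != k):
        uc, ub = lc[:], 0.0
    else:
        # Otherwise fall back to the constant upper bound max_i uppers[i];
        # Ti-Lin instead optimizes the plane to shrink this gap.
        uc, ub = [0.0] * n, max(uppers)
    return lc, lb, uc, ub
```

A verifier propagates such planes layer by layer; the looser the upper plane (e.g., the constant fallback above), the larger the accumulated over-approximation zone and the smaller the certifiable perturbation radius, which is exactly what Ti-Lin's tighter bounds address.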
Yuan Xiao
ShanghaiTech University
Computer Security
Yuchen Chen
Assistant Professor of Communication Studies at CUNY, Baruch College
China, Digital Studies, STS
Shiqing Ma
University of Massachusetts, Amherst
Security, AI, SE
Chunrong Fang
Software Institute, Nanjing University
Software Testing, Software Engineering, Computer Science
Tongtong Bai
State Key Laboratory for Novel Software Technology, Nanjing University, China
Mingzheng Gu
State Key Laboratory for Novel Software Technology, Nanjing University, China
Yuxin Cheng
State Key Laboratory for Novel Software Technology, Nanjing University, China
Yanwei Chen
State Key Laboratory for Novel Software Technology, Nanjing University, China
Zhenyu Chen
State Key Laboratory for Novel Software Technology, Nanjing University, China; Shenzhen Research Institute, Nanjing University, China