Set-Based Training for Neural Network Verification

📅 2024-01-26
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the insufficient robustness of neural networks against input perturbations and the high computational cost of formal verification in safety-critical applications, this paper proposes a novel set-based training framework. The method introduces, for the first time, the concept of "gradient sets," enabling joint propagation of input sets and gradient sets to dynamically shrink and center the output enclosure. By unifying robustness optimization and formal verifiability within the training process, it breaks the conventional separation between training and verification. Experiments demonstrate that models trained with this framework maintain high accuracy while significantly reducing output set diameters; consequently, fast polynomial-time verification algorithms become sufficient for medium-sized networks. The approach thus achieves both strong certified robustness and efficient formal verifiability.

📝 Abstract
Neural networks are vulnerable to adversarial attacks, i.e., small input perturbations can significantly affect the outputs of a neural network. Therefore, to ensure safety in safety-critical environments, the robustness of a neural network must be formally verified against input perturbations, e.g., from noisy sensors. To improve the robustness of neural networks and thus simplify their formal verification, we present a novel set-based training procedure in which we compute the set of possible outputs given the set of possible inputs and compute, for the first time, a gradient set, i.e., each possible output has a different gradient. Therefore, we can directly reduce the size of the output enclosure by choosing gradients toward its center. Small output enclosures increase the robustness of a neural network and, at the same time, simplify its formal verification, because a larger size of propagated sets increases the conservatism of most verification methods. Our extensive evaluation demonstrates that set-based training produces robust neural networks with competitive performance, which can be verified using fast (polynomial-time) verification algorithms due to the reduced output set.
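The core idea of propagating an input set through the network to obtain an output enclosure can be illustrated with interval bound propagation, a simple axis-aligned over-approximation. Note this is a minimal sketch under assumed simplifications: the paper's set representation and the gradient-set computation are more precise; here the sum of output radii stands in for the enclosure size that set-based training would penalize during optimization. All function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def ibp_forward(center, radius, weights, biases):
    """Propagate an axis-aligned input box through a ReLU network
    using interval bound propagation (a coarse over-approximation;
    the paper uses a more precise set representation)."""
    c, r = center, radius
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Affine layer: center is mapped exactly, radius via |W|.
        c = W @ c + b
        r = np.abs(W) @ r
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo = np.maximum(c - r, 0.0)
            hi = np.maximum(c + r, 0.0)
            c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    return c, r

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 2)), rng.normal(size=(3, 4))]
biases = [np.zeros(4), np.zeros(3)]

# Input box: center point with a small perturbation radius.
c_out, r_out = ibp_forward(np.array([0.5, -0.2]), np.full(2, 0.05),
                           weights, biases)
# Size of the output enclosure; set-based training adds a term like
# this to the loss and steers gradients to shrink it.
enclosure_size = float(r_out.sum())
```

Because the enclosure is a sound over-approximation, the exact output for any point in the input box is guaranteed to lie within `[c_out - r_out, c_out + r_out]`, which is what makes a small enclosure directly useful for verification.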
Problem

Research questions and friction points this paper is trying to address.

Neural Networks
Robustness
Reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble Training
Neural Network Robustness
Gradient Adjustment