Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Conventional formal verification of deep neural networks (DNNs) classifies a model as simply safe or unsafe, without fine-grained quantification of the *degree* of unsafety. Method: This paper introduces a novel abstract DNN verification paradigm that hierarchically models the unsafe structure of the output reachable set, enabling multi-level robustness quantification. Grounded in abstract interpretation theory, the approach integrates interval analysis with symbolic propagation to construct a scalable, hierarchical safety abstraction framework. Contribution/Results: The authors prove that its computational complexity is no higher than that of classical methods and establish its consistency with abstract-interpretation-based robustness verification. Experiments on a Habitat 3.0 reinforcement learning task and standard DNN verification benchmarks demonstrate that the method ranks adversarial examples by severity of safety violation, enhancing the interpretability and practical utility of verification outcomes.

📝 Abstract
Traditional methods for formal verification (FV) of deep neural networks (DNNs) are constrained by a binary encoding of safety properties, where a model is classified as either safe or unsafe (robust or not robust). This binary encoding fails to capture the nuanced safety levels within a model, often resulting in requirements that are either overly restrictive or overly permissive. In this paper, we introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs, providing a more granular analysis of the safety aspects of a given DNN. Crucially, by leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the FV process, requiring the same (in the worst case) or even potentially less computational effort than the traditional binary verification approach. Specifically, we demonstrate how this formulation allows ranking adversarial inputs according to their abstract safety level violation, offering a more detailed evaluation of the model's safety and robustness. Our contributions include a theoretical exploration of the relationship between our novel abstract safety formulation and existing approaches that employ abstract interpretation for robustness verification, a complexity analysis of the novel problem introduced, and an empirical evaluation considering both a complex deep reinforcement learning task (based on Habitat 3.0) and standard DNN-Verification benchmarks.
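The abstract describes reasoning about output reachable sets via abstract interpretation with interval analysis. As a rough illustration of the interval-analysis ingredient only (the paper's hierarchical abstraction and symbolic propagation are not reproduced here), a minimal interval bound propagation sketch for a ReLU network might look like:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through an affine layer x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    # Positive weights pair low with low and high with high; negative
    # weights swap the endpoints.
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def output_bounds(layers, lo, hi):
    """Sound over-approximation (a box) of a ReLU net's output reachable set.

    layers: list of (W, b) pairs, with ReLU after every layer but the last.
    """
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    return lo, hi
```

The resulting box over-approximates the true reachable set; a hierarchical safety check can then ask which unsafe regions the box intersects, rather than just whether it intersects any.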
Problem

Research questions and friction points this paper is trying to address.

Hierarchical safety verification for nuanced DNN robustness analysis
Abstract DNN-Verification with multiple safety level assessment
Ranking adversarial inputs by abstract safety violation severity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical safety abstract interpretation for DNNs
Abstract DNN-Verification with granular safety analysis
Ranking adversarial inputs by abstract safety level
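The ranking idea above can be sketched in miniature. This is a hedged illustration, assuming a scalar output and a hierarchy of nested unsafe regions encoded as increasing thresholds; the names `safety_level`, `rank_adversarial`, `forward_bounds`, and `thresholds` are hypothetical and not from the paper:

```python
def safety_level(out_lo, out_hi, thresholds):
    """Deepest unsafe level the output interval [out_lo, out_hi] can reach.

    thresholds: increasing values t_1 < t_2 < ... encoding nested unsafe
    regions {y > t_k} (a hypothetical encoding; the paper defines its
    hierarchy over output reachable sets, whose exact form is not shown here).
    """
    level = 0
    for k, t in enumerate(thresholds, start=1):
        if out_hi > t:  # the level-k unsafe region is reachable
            level = k
    return level

def rank_adversarial(input_boxes, forward_bounds, thresholds):
    """Sort input regions by severity of their abstract safety violation.

    forward_bounds: callable mapping an input box (lo, hi) to output bounds,
    e.g. an interval bound propagation routine.
    """
    scored = [(safety_level(*forward_bounds(lo, hi), thresholds), (lo, hi))
              for lo, hi in input_boxes]
    return sorted(scored, key=lambda s: -s[0])
```

Under this toy encoding, an input region whose output bounds cross a deeper threshold is ranked as a more severe violation, mirroring the paper's claim that verification outcomes become gradable rather than binary.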