🤖 AI Summary
Neural networks deployed in safety-critical applications are vulnerable to small input perturbations, and their formal verification often fails due to the conservatism of conventional over-approximation methods (e.g., zonotopes). To address this, we propose the *Invariant Latent Space* framework: for the first time, we back-propagate output specifications into the input space and construct a unified high-dimensional representation based on projected zonotopes. This enables inputs and outputs to be represented as coupled “shadows” within the same space, supporting cross-space constraint propagation and iterative refinement. The method relies solely on matrix operations—naturally amenable to GPU acceleration—and is integrated into a branch-and-bound solver. Evaluated on the VNN-COMP’24 benchmark, our approach significantly reduces the number of subproblems and achieves verification efficiency competitive with state-of-the-art tools.
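The “shadow” view of projected zonotopes can be made concrete with a small sketch. Under the simplifying assumption of a single linear layer `y = W x` (the function and variable names below are illustrative, not the paper’s API), the input set and the output enclosure share one latent zonotope over the stacked coordinates `(x, y)`, and each is recovered as a projection of it:

```python
import numpy as np

def joint_zonotope(c_in, G_in, W):
    """Latent zonotope over (x, y) with y = W x; input and output share one factor vector."""
    c = np.concatenate([c_in, W @ c_in])
    G = np.vstack([G_in, W @ G_in])
    return c, G

def shadow(c, G, dims):
    """Project (take the 'shadow' of) a zonotope onto selected dimensions."""
    return c[dims], G[dims, :]

c_in, G_in = np.zeros(2), np.eye(2)          # unit-box input set as a zonotope
W = np.array([[1.0, -1.0], [0.5, 2.0]])      # toy linear layer
c, G = joint_zonotope(c_in, G_in, W)

cx, Gx = shadow(c, G, [0, 1])                # input shadow: the original input set
cy, Gy = shadow(c, G, [2, 3])                # output shadow: the output enclosure
```

Because both shadows are projections of the same latent set, a constraint on the output coordinates restricts the shared factors and therefore also the input shadow, which is the basis of the cross-space constraint propagation described above.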
📝 Abstract
Neural networks are ubiquitous. However, they are often sensitive to small input changes. Hence, to prevent unexpected behavior in safety-critical applications, their formal verification -- a notoriously hard problem -- is necessary. Many state-of-the-art verification algorithms use reachability analysis or abstract interpretation to enclose the set of possible outputs of a neural network. Often, the verification is inconclusive due to the conservatism of the enclosure. To address this problem, we design a novel latent space for formal verification that enables the transfer of output specifications to the input space for an iterative specification-driven input refinement, i.e., we iteratively reduce the set of possible inputs to only enclose the unsafe ones. The latent space is constructed from a novel view of projection-based set representations, e.g., zonotopes, which are commonly used in reachability analysis of neural networks. A projection-based set representation is a "shadow" of a higher-dimensional set -- a latent space -- that does not change during a set propagation through a neural network. Hence, the input set and the output enclosure are "shadows" of the same latent space that we can use to transfer constraints. We present an efficient verification tool for neural networks that uses our iterative refinement to significantly reduce the number of subproblems in a branch-and-bound procedure. Using zonotopes as a set representation, unlike many other state-of-the-art approaches, our approach can be realized by only using matrix operations, which enables a significant speed-up through efficient GPU acceleration. We demonstrate that our tool achieves competitive performance, which would place it among the top-ranking tools of the last neural network verification competition (VNN-COMP'24).
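To illustrate how an output specification can be transferred back to the input space, here is a minimal sketch of one plausible mechanism (my assumption for exposition, not the tool’s actual code): since the input set and the output enclosure `{c + G @ alpha : alpha in [lo, hi]}` share the same factors `alpha`, a half-space specification `a^T y <= t` on the output can soundly tighten the factor bounds, which in turn shrinks the input set:

```python
import numpy as np

def tighten_factors(c_out, G_out, a, t, lo, hi):
    """Keep only factor values alpha for which a^T y <= t can still hold,
    where y = c_out + G_out @ alpha and alpha_j in [lo_j, hi_j]."""
    g = a @ G_out                  # factor coefficients of a^T y
    offset = t - a @ c_out
    lo, hi = lo.copy(), hi.copy()
    for i in range(len(g)):
        # Most favorable contribution of all other factors to the constraint.
        m = sum(min(g[j] * lo[j], g[j] * hi[j]) for j in range(len(g)) if j != i)
        if g[i] > 0:
            hi[i] = min(hi[i], (offset - m) / g[i])
        elif g[i] < 0:
            lo[i] = max(lo[i], (offset - m) / g[i])
    return lo, hi

# Unit-box input, toy layer y = W x, unsafe specification y_0 <= -1.
W = np.array([[1.0, -1.0], [0.5, 2.0]])
c_out, G_out = np.array([0.0, 1.0]), W       # output shadow of the unit box
lo, hi = tighten_factors(c_out, G_out, np.array([1.0, 0.0]), -1.0,
                         -np.ones(2), np.ones(2))
# Factor box shrinks from [-1, 1]^2 to [-1, 0] x [0, 1]; since the input
# generators are the identity, the input set shrinks to the same box.
```

Each tightening step uses only matrix-vector products and elementwise comparisons, consistent with the abstract’s claim that the approach can be realized with matrix operations amenable to GPU acceleration.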