🤖 AI Summary
Formal verification of high-dimensional systems, e.g., 26-dimensional neural network controllers, is hindered by the exponential complexity of state-space discretization. Method: We propose a novel framework combining convex autoencoders for verifiably safe dimensionality reduction, kernel regression to model latent-space dynamics, interval-based finite abstraction construction, and a mechanism for mapping verification results back to the original space. Contribution/Results: This is the first approach to achieve *formally guaranteed latent-space abstraction*, wherein verification results in the reduced space are mathematically guaranteed to hold in the original high-dimensional system. Experiments demonstrate that our method significantly alleviates the "curse of dimensionality" while preserving full formal correctness, overcoming the scalability limitations of conventional discretization-based techniques.
📝 Abstract
Finite-abstraction methods provide a powerful formal framework for proving that systems satisfy their specifications. However, these techniques face scalability challenges for high-dimensional systems, as they rely on a state-space discretization whose size grows exponentially with dimension. Learning-based approaches to dimensionality reduction, using neural networks and autoencoders, have shown great potential to alleviate this problem. However, ensuring the correctness of the resulting verification remains an open question. In this work, we provide a formal approach that reduces the dimensionality of systems via convex autoencoders and learns the dynamics in the latent space through a kernel-based method. We then construct a finite abstraction from the learned model in the latent space and guarantee that the abstraction contains the true behaviors of the original system. We show that verification results in the latent space can be mapped back to the original system. Finally, we demonstrate the effectiveness of our approach on multiple systems, including a 26D system controlled by a neural network, showing significant scalability improvements without loss of rigor.
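The pipeline in the abstract (reduce dimension, learn latent dynamics with kernel regression, build an interval abstraction that over-approximates behavior) can be illustrated with a toy sketch. Everything below is illustrative, not the paper's implementation: a fixed linear map stands in for the convex autoencoder, Nadaraya-Watson regression stands in for the kernel-based method, and the abstraction is inflated by an *empirical* error bound on sampled transitions, whereas the paper establishes a formal guarantee over the true system.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(x):
    # Toy stable 4D system standing in for the high-dimensional plant.
    return 0.9 * x

E = np.array([[0.5, 0.5, 0.5, 0.5]])  # "encoder": R^4 -> R^1 (stand-in)

def encode(x):
    return E @ x

# Step 1: sample transitions and project them into the latent space.
X = rng.uniform(-1, 1, size=(200, 4))
Z = np.array([encode(x) for x in X])             # latent states
Zn = np.array([encode(dynamics(x)) for x in X])  # latent successors

# Step 2: kernel (Nadaraya-Watson) regression of the latent dynamics.
def kernel_predict(z, bandwidth=0.2):
    w = np.exp(-((Z[:, 0] - z) ** 2) / (2 * bandwidth ** 2))
    return np.sum(w * Zn[:, 0]) / np.sum(w)

# Empirical regression error over the samples (the paper derives a
# formal bound; this is only an illustration).
eps = max(abs(kernel_predict(z) - zn) for z, zn in zip(Z[:, 0], Zn[:, 0]))

# Step 3: interval abstraction -- partition the latent range into cells
# and map each cell through the learned model, inflated by eps.
cells = np.linspace(-2, 2, 9)  # 8 cells covering the latent range

def successor_interval(i):
    mask = (cells[i] <= Z[:, 0]) & (Z[:, 0] < cells[i + 1])
    preds = [kernel_predict(z) for z in Z[mask, 0]]
    if not preds:
        return None  # no samples landed in this cell
    return min(preds) - eps, max(preds) + eps

# Sanity check: every sampled latent successor lies in its source cell's
# inflated successor interval, i.e. the abstraction contains the
# sampled behaviors.
sound = True
for z, zn in zip(Z[:, 0], Zn[:, 0]):
    i = int(np.clip(np.searchsorted(cells, z, side="right") - 1, 0, 7))
    lo, hi = successor_interval(i)
    sound = sound and (lo <= zn <= hi)
print(bool(sound))  # True
```

The inflation by `eps` is what makes the abstraction conservative: any property verified on the inflated interval transitions also holds for the (sampled) latent behaviors, which is a data-level analogue of the paper's soundness guarantee for the true system.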