🤖 AI Summary
Existing reachability analysis methods typically rely on known dynamics models, full state information, or large datasets, making them ill-suited for scenarios involving only high-dimensional visual observations. This work proposes a state-free topological reachability analysis approach that leverages deep representation learning to extract a low-dimensional latent space from image trajectories. By integrating topological data analysis with Morse graph theory, the method estimates the domain of attraction for controlled systems directly in this latent space. Notably, this study extends the MORALS framework to the purely vision-based setting for the first time, successfully generating coherent Morse graphs across multiple dynamical systems and controllers. The resulting domain-of-attraction estimates achieve accuracy comparable to the original MORALS method while substantially reducing reliance on explicit state observations.
📄 Abstract
Reachability analysis has become increasingly important in robotics for distinguishing safe from unsafe states. Unfortunately, existing reachability and safety analysis methods often fall short: they typically require known system dynamics or large datasets to estimate accurate system models, are computationally expensive, and assume full state information. A recent method, called MORALS, aims to address these shortcomings by using topological tools to estimate Regions of Attraction (ROA) in a low-dimensional latent space. However, MORALS still relies on full state knowledge and has not been studied in settings where only sensor measurements are available. This paper presents Visual Morse Graph-Aided Estimation of Regions of Attraction in a Learned Latent Space (V-MORALS). V-MORALS takes in a dataset of image-based trajectories of a system under a given controller and learns a latent space for reachability analysis. Using this learned latent space, our method generates well-defined Morse graphs, from which we can compute ROAs for various systems and controllers. V-MORALS provides capabilities similar to the original MORALS architecture without relying on state knowledge, using only high-dimensional sensor data. Our project website is at: https://v-morals.onrender.com.
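The Morse-graph step the abstract alludes to can be illustrated with a minimal sketch: discretize latent-space trajectories into grid cells, build a directed graph of observed cell-to-cell transitions, treat strongly connected components with no outgoing edges as attractors, and assign each cell to an attractor's ROA when it reaches exactly one. This is an illustrative approximation under assumed 1-D latent data, not the authors' implementation; all function names and parameters here are hypothetical.

```python
from collections import defaultdict

def to_cells(traj, width=1.0):
    """Discretize one latent trajectory (list of tuples) into integer grid cells."""
    return [tuple(int(c // width) for c in p) for p in traj]

def transition_graph(trajs, width=1.0):
    """Directed graph of observed cell-to-cell transitions across all trajectories."""
    g = defaultdict(set)
    for traj in trajs:
        cs = to_cells(traj, width)
        for a, b in zip(cs, cs[1:]):
            g[a].add(b)
            g[b]  # touch b so every cell appears as a node
    return g

def condensation(g):
    """Kosaraju's algorithm: map each node to a strongly-connected-component id."""
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in g[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in list(g):
        if u not in seen:
            dfs(u)
    rg = defaultdict(set)  # reversed graph
    for u in g:
        for v in g[u]:
            rg[v].add(u)
    comp, cid = {}, 0
    for u in reversed(order):
        if u in comp:
            continue
        comp[u], stack = cid, [u]
        while stack:
            x = stack.pop()
            for y in rg[x]:
                if y not in comp:
                    comp[y] = cid
                    stack.append(y)
        cid += 1
    return comp

def regions_of_attraction(g):
    """Attractors = SCCs with no outgoing edges; assign cells that reach exactly one."""
    comp = condensation(g)
    attractors = set(comp.values()) - {
        comp[u] for u in g for v in g[u] if comp[v] != comp[u]
    }
    roa = {}
    for u in g:
        visited, stack, reach = {u}, [u], set()
        while stack:
            x = stack.pop()
            if comp[x] in attractors:
                reach.add(comp[x])
            for y in g[x]:
                if y not in visited:
                    visited.add(y)
                    stack.append(y)
        roa[u] = reach.pop() if len(reach) == 1 else None  # None = undecided cell
    return roa, comp

# Two synthetic 1-D latent trajectories: one settling near 0, one near 5.
trajs = [[(2.7,), (1.8,), (0.9,), (0.3,), (0.1,)],
         [(3.6,), (4.2,), (4.7,), (5.2,), (5.4,)]]
g = transition_graph(trajs)
roa, comp = regions_of_attraction(g)
assert roa[(2,)] == comp[(0,)]  # left cells flow to the attractor near 0
assert roa[(3,)] == comp[(5,)]  # right cells flow to the attractor near 5
```

In the full method this graph would be built over a learned latent space (e.g. from an image autoencoder) rather than raw 1-D samples, but the condensation-and-sink structure of the ROA computation is the same idea.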