🤖 AI Summary
Generative motion planners (GMPs) lack formal guarantees on the safety and dynamic feasibility of their outputs, primarily because existing neural network verification (NNV) tools, designed for small-scale networks, are intractable to apply to million-parameter GMPs.
Method: We propose a “verifiable reference trajectory + closed-loop verification” paradigm: (i) offline synthesis of a compact, locally stable neural tracking controller trained to follow sampled reference trajectories from the GMP; (ii) online formal safety verification of the closed-loop system via reachability analysis. The original GMP remains unmodified.
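The closed-loop verification step can be illustrated with interval-bound reachability for a toy linear plant tracked by a tiny neural controller. This is a minimal sketch, not the paper's actual pipeline: the dynamics, the single-layer "network," the box-shaped sets, and all constants below are illustrative assumptions.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an axis-aligned box through x -> W @ x + b."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    return W @ c + b - np.abs(W) @ r, W @ c + b + np.abs(W) @ r

def controller_bounds(lo, hi, layers):
    """Interval bound propagation through a small ReLU net (linear output layer)."""
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def verify_closed_loop(x_lo, x_hi, A, B, layers, safe_lo, safe_hi, steps=20):
    """Over-approximate the reachable set of x+ = A x + B u, u = net(x),
    for `steps` steps and check every box stays inside the safe box."""
    for _ in range(steps):
        u_lo, u_hi = controller_bounds(x_lo, x_hi, layers)
        cx, rx = (x_lo + x_hi) / 2.0, (x_hi - x_lo) / 2.0
        cu, ru = (u_lo + u_hi) / 2.0, (u_hi - u_lo) / 2.0
        c = A @ cx + B @ cu
        r = np.abs(A) @ rx + np.abs(B) @ ru  # boxes over-approximate the true set
        x_lo, x_hi = c - r, c + r
        if np.any(x_lo < safe_lo) or np.any(x_hi > safe_hi):
            return False
    return True

# Toy 1-D example: stable plant, "network" is the single affine layer u = -x.
A = np.array([[0.5]]); B = np.array([[0.3]])
layers = [(np.array([[-1.0]]), np.array([0.0]))]
ok = verify_closed_loop(np.array([-0.1]), np.array([0.1]), A, B, layers,
                        np.array([-1.0]), np.array([1.0]))
print(ok)  # True: the reachable boxes contract, so the safe box is certified
```

Real NNV tools (e.g. those based on linear relaxations or zonotopes) give much tighter bounds than this box arithmetic, but the structure is the same: bound the controller output over the current state set, push the set through the dynamics, and check containment in the safe set at every step.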
Contributions/Results: The framework integrates diffusion models or flow matching for trajectory generation, vision-language models for semantic planning guidance, learning-based tracking control, and scalable NNV. Evaluated in simulation on ground robots and quadrotors and on a physical differential-drive robot, it significantly improves safety and dynamic feasibility for diverse GMPs without sacrificing their expressiveness.
📝 Abstract
We present a method for formal safety verification of learning-based generative motion planners. Generative motion planners (GMPs) offer advantages over traditional planners, but verifying the safety and dynamic feasibility of their outputs is difficult since neural network verification (NNV) tools scale only to a few hundred neurons, while GMPs often contain millions. To preserve GMP expressiveness while enabling verification, our key insight is to imitate the GMP by stabilizing references sampled from the GMP with a small neural tracking controller and then applying NNV to the closed-loop dynamics. This yields reachable sets that rigorously certify closed-loop safety, while the controller enforces dynamic feasibility. Building on this, we construct a library of verified GMP references and deploy them online in a way that imitates the original GMP distribution whenever it is safe to do so, improving safety without retraining. We evaluate across diverse planners, including diffusion, flow matching, and vision-language models, and improve safety in simulation (ground robots and quadcopters) and on hardware (a differential-drive robot).
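The "verified library + online deployment" idea in the abstract can be sketched as a filter-then-imitate selection rule. Everything below is an illustrative assumption, not the paper's interface: references are waypoint arrays, each verified reachable set is modeled as a fixed-radius tube, obstacles are circles, and closeness to the GMP's proposal is mean waypoint distance.

```python
import numpy as np

def tube_is_safe(ref, tube_radius, obstacles):
    """A verified reference is deployable here if its reachable tube
    (a fixed-radius ball around each waypoint) clears every obstacle."""
    for center, obs_radius in obstacles:
        dists = np.linalg.norm(ref - center, axis=1)
        if np.any(dists < obs_radius + tube_radius):
            return False
    return True

def select_reference(gmp_proposal, library, obstacles):
    """Imitate the GMP when safe: among verified references whose tubes are
    collision-free, return the one closest to the GMP's proposed trajectory."""
    best, best_cost = None, np.inf
    for ref, tube_radius in library:
        if not tube_is_safe(ref, tube_radius, obstacles):
            continue
        cost = np.mean(np.linalg.norm(ref - gmp_proposal, axis=1))
        if cost < best_cost:
            best, best_cost = ref, cost
    return best  # None if no verified reference is safe in this scene

# Toy 2-D example: two straight-line references, one blocked by an obstacle.
t = np.linspace(0.0, 1.0, 10)[:, None]
ref_a = t * np.array([1.0, 0.0])                          # through the obstacle
ref_b = t * np.array([1.0, 0.0]) + np.array([0.0, 0.6])   # detours above it
library = [(ref_a, 0.05), (ref_b, 0.05)]
obstacles = [(np.array([0.5, 0.0]), 0.2)]
chosen = select_reference(ref_a, library, obstacles)
print(np.allclose(chosen, ref_b))  # True: the unsafe reference is filtered out
```

The key property this preserves is the one the abstract claims: the deployed trajectory is always one whose closed-loop behavior was already verified offline, while the selection step keeps the output as close as possible to what the unmodified GMP would have produced.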