AI Summary
Existing neural contraction dynamical systems (NCDS) struggle to simultaneously support multi-task switching and implicit obstacle avoidance while ensuring safety for autonomous robotic control.
Method: We propose a conditional NCDS architecture that defines provably safe regions via Riemannian geometry and incorporates an uncertainty-driven latent-space obstacle-avoidance mechanism. The framework integrates conditional neural control, Riemannian metric regularization, and geometric constraints in the latent space, guaranteeing strict contraction stability under dynamic task transitions.
Results: Evaluated across multiple robotic simulation tasks, the method achieves end-to-end learning flexibility alongside formally verified stability. It enables real-time, adaptive task execution with implicit, reactive obstacle avoidance without compromising contraction guarantees. Experimental results demonstrate significant improvements in motion control safety and robustness compared to baseline NCDS approaches, particularly under uncertain environments and rapid task switching.
Abstract
Stability guarantees are crucial when ensuring that a fully autonomous robot does not take undesirable or potentially harmful actions. We recently proposed the Neural Contractive Dynamical Systems (NCDS), a neural network architecture that guarantees contractive stability. With this, learning-from-demonstration approaches can trivially provide stability guarantees. However, our early work left several questions unanswered, which we address here. Beyond providing an in-depth explanation of NCDS, this paper extends the framework with more careful regularization, a conditional variant for handling multiple tasks, and an uncertainty-driven approach to latent obstacle avoidance. Experiments verify that the developed system has the flexibility of ordinary neural networks while providing the stability guarantees needed for autonomous robotics.
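Contractive stability, the property at the heart of NCDS, means that any two trajectories of the dynamics converge toward each other over time, so perturbations are forgotten rather than amplified. The following is a minimal numerical sketch of that idea only, not the NCDS architecture itself: it assumes a simple linear system whose Jacobian is made negative definite by construction, J = -(AAᵀ + εI), which is sufficient for contraction, and checks that two trajectories started from different points shrink their mutual distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative construction (not the NCDS parameterization): any matrix A
# yields a negative-definite Jacobian J = -(A A^T + eps * I), so the linear
# system x_dot = J x is contractive by construction.
eps = 0.1
A = rng.normal(size=(2, 2))
J = -(A @ A.T + eps * np.eye(2))  # symmetric negative definite

def f(x):
    """Contractive vector field x_dot = J x."""
    return J @ x

def rollout(x0, dt=0.01, steps=500):
    """Integrate the dynamics with explicit Euler from x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * f(x)
    return x

x0_a = np.array([1.0, -2.0])
x0_b = np.array([-3.0, 0.5])
d0 = np.linalg.norm(x0_a - x0_b)            # initial separation
d1 = np.linalg.norm(rollout(x0_a) - rollout(x0_b))  # separation after 5 s
```

Under contraction, d1 is strictly smaller than d0 regardless of the starting points; NCDS generalizes this idea to nonlinear neural vector fields whose Jacobian is kept negative definite everywhere.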