AI Summary
Robot path-following controllers struggle to simultaneously achieve adaptability and formal safety guarantees in complex, dynamic environments. Method: This paper proposes a synergistic framework integrating reinforcement learning (RL) with a high-assurance deterministic controller, built upon the Simplex architecture. It systematically establishes stability and safety design principles for Simplex-based path following and introduces the first RL path-following controller with formally verified safe switching, incorporating safety-critical state monitoring and real-time switching mechanisms. Contribution/Results: Simulation and preliminary experimental results demonstrate that the controller strictly satisfies motion safety constraints while achieving state-of-the-art tracking performance. Moreover, it significantly enhances robustness and trustworthiness under dynamic environmental conditions.
Abstract
Robot navigation in complex environments necessitates controllers that are both adaptive and safe. Traditional controllers like Regulated Pure Pursuit, the Dynamic Window Approach, and Model-Predictive Path Integral control, while reliable, struggle to adapt to dynamic conditions. Reinforcement Learning offers adaptability but lacks formal safety guarantees. To address this, we propose a path tracking controller leveraging the Simplex architecture. It combines a Reinforcement Learning controller, for adaptability and performance, with a high-assurance controller providing safety and stability. Our contribution is twofold. First, we discuss general stability and safety considerations for designing controllers using the Simplex architecture. Second, we present a Simplex-based path tracking controller. Our simulation results, supported by preliminary in-field tests, demonstrate the controller's effectiveness in maintaining safety while achieving performance comparable to state-of-the-art methods.
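The core Simplex idea described above can be sketched as a decision module that monitors the safety-critical state and hands control to the high-assurance controller whenever the learned controller would leave the safety envelope. The following is a minimal illustrative sketch, not the paper's implementation: the state variables, safety bounds, and the two stand-in control laws are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class State:
    cross_track_error: float  # signed distance from the reference path (m)
    speed: float              # forward speed (m/s)

# Hypothetical safety envelope; the bounds are illustrative, not from the paper.
MAX_ERROR = 0.5  # m
MAX_SPEED = 1.0  # m/s

def is_safe(state: State) -> bool:
    """Safety monitor: check the safety-critical state against the envelope."""
    return abs(state.cross_track_error) <= MAX_ERROR and state.speed <= MAX_SPEED

def rl_controller(state: State) -> float:
    """Stand-in for the learned, high-performance controller (steering command)."""
    return -1.2 * state.cross_track_error  # aggressive gain, illustrative only

def safe_controller(state: State) -> float:
    """Stand-in for the high-assurance baseline (e.g. a conservative tracking law)."""
    return -0.4 * state.cross_track_error  # conservative gain, illustrative only

def simplex_step(state: State) -> tuple[str, float]:
    """Simplex switching logic: use the RL controller while the monitored state
    stays inside the safety envelope; otherwise fall back to the verified one."""
    if is_safe(state):
        return "rl", rl_controller(state)
    return "safe", safe_controller(state)
```

For example, `simplex_step(State(cross_track_error=0.1, speed=0.5))` selects the RL controller, while `simplex_step(State(cross_track_error=0.8, speed=0.5))` triggers the fallback. In the actual framework, the switching condition is derived from formally verified stability and safety analysis rather than simple box constraints.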