Safe Reinforcement Learning via Recovery-based Shielding with Gaussian Process Dynamics Models

📅 2026-02-12
📈 Citations: 0
Influential: 0

📝 Abstract
Reinforcement learning (RL) is a powerful framework for optimal decision-making and control but often lacks provable guarantees for safety-critical applications. In this paper, we introduce a novel recovery-based shielding framework that enables safe RL with a provable safety lower bound for unknown and non-linear continuous dynamical systems. The proposed approach integrates a backup policy (shield) with the RL agent, leveraging Gaussian process (GP) based uncertainty quantification to predict potential violations of safety constraints and dynamically recovering to safe trajectories only when necessary. Experience gathered by the 'shielded' agent is used to construct the GP models, with policy optimization via internal model-based sampling, enabling unrestricted exploration and sample-efficient learning without compromising safety. Empirically, our approach demonstrates strong performance and strict safety compliance on a suite of continuous control environments.
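The shielding rule described in the abstract can be made concrete with a short sketch: a GP dynamics model predicts the next state under the task policy's action, and the agent falls back to the backup policy whenever the predicted safety margin, shrunk by the model's uncertainty, could be violated. The following is a minimal sketch using scikit-learn's GaussianProcessRegressor; the constraint_fn, beta, and policy interfaces are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a recovery-based shield with a GP dynamics model.
# constraint_fn, beta, and the policy interfaces are hypothetical stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

class RecoveryShield:
    """Switches from the task policy to a backup policy whenever the
    GP dynamics model predicts a possible safety violation."""

    def __init__(self, constraint_fn, beta=2.0):
        self.constraint_fn = constraint_fn  # h(s) >= 0 means "safe"
        self.beta = beta                    # confidence multiplier on GP std
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
        self.gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        self.data_X, self.data_y = [], []

    def update(self, state, action, next_state):
        # Fit the GP on (state, action) -> next_state transitions.
        # Refitting on every transition is O(n^3); fine for a sketch only.
        self.data_X.append(np.concatenate([state, action]))
        self.data_y.append(next_state)
        self.gp.fit(np.array(self.data_X), np.array(self.data_y))

    def choose(self, state, task_action, backup_action):
        if not self.data_X:
            return backup_action  # no model yet: stay conservative
        x = np.concatenate([state, task_action]).reshape(1, -1)
        mean, std = self.gp.predict(x, return_std=True)
        # Pessimistic one-step check: shrink the safety margin by
        # beta standard deviations of the GP's predictive uncertainty.
        worst_case = self.constraint_fn(mean[0]) - self.beta * np.max(std)
        return task_action if worst_case >= 0 else backup_action
```

In a sketch like this, beta controls the trade-off between conservatism and task performance: larger values intervene earlier, recovering to the backup policy only when necessary, as the abstract describes.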
Problem

Research questions and friction points this paper is trying to address.

Safe Reinforcement Learning
Safety Guarantees
Continuous Dynamical Systems
Safety Constraints
Recovery-based Shielding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recovery-based Shielding
Gaussian Process Dynamics
Safe Reinforcement Learning
Uncertainty Quantification
Model-based Sampling
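The "Model-based Sampling" contribution above refers to the abstract's description of policy optimization via internal model-based sampling: candidate policies are evaluated on rollouts drawn from the learned GP dynamics rather than by exploring the real, safety-critical system. Below is a minimal sketch under that assumption; the policy, reward_fn, and horizon names are hypothetical stand-ins, and gp is a fitted GP over (state, action) -> next-state transitions as in the shield sketch above.

```python
# Minimal sketch of internal model-based sampling: short imagined rollouts
# under the learned GP dynamics, used to estimate policy returns without
# touching the real environment. policy and reward_fn are hypothetical.
import numpy as np

def imagined_rollout(gp, policy, reward_fn, start_state, horizon=10, rng=None):
    """Roll the policy forward under the GP model, sampling next states
    from the GP's predictive distribution."""
    rng = np.random.default_rng() if rng is None else rng
    state, total_reward = np.asarray(start_state), 0.0
    for _ in range(horizon):
        action = policy(state)
        x = np.concatenate([state, action]).reshape(1, -1)
        mean, std = gp.predict(x, return_std=True)
        # Sample a plausible next state from the model's uncertainty.
        state = mean[0] + rng.standard_normal(mean[0].shape) * np.ravel(std)
        total_reward += reward_fn(state, action)
    return total_reward
```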
Alexander W. Goodall
Imperial College London, Department of Computing
Francesco Belardinelli
Imperial College London
Artificial Intelligence
Logic
Formal Methods