ProSh: Probabilistic Shielding for Model-free Reinforcement Learning

📅 2025-10-17
🤖 AI Summary
This paper addresses the lack of formal safety guarantees in model-free reinforcement learning (RL) by proposing a risk-augmented probabilistic shielding framework for safe RL. The method augments states with a risk budget and jointly trains a cost critic with the policy network, dynamically masking high-risk actions during training so that sampled actions remain safe in expectation. It embeds safety constraints directly into the model-free paradigm, remains compatible with constrained Markov decision process (CMDP) modeling, and provides a theoretical upper bound on the expected cumulative safety cost throughout training. Experiments across diverse environments demonstrate effective control of expected safety costs while preserving policy optimality in deterministic settings. According to the authors, this is the first approach to achieve verifiable, performance-preserving safety guarantees during training in model-free RL.
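The risk-budget-enhanced state representation described above can be sketched as an environment wrapper: each observation is extended with the remaining cost budget, which shrinks as safety costs are incurred. All class and method names below are illustrative assumptions, not the paper's API.

```python
import numpy as np

class DummyEnv:
    """Toy deterministic environment: every step incurs a fixed safety cost."""
    def reset(self):
        return np.zeros(2)

    def step(self, action):
        # returns (observation, reward, safety cost, done flag)
        return np.zeros(2), 1.0, 0.5, False

class RiskAugmentedEnv:
    """Wrapper appending the remaining risk budget to each observation."""
    def __init__(self, env, budget):
        self.env = env
        self.initial_budget = budget
        self.budget = budget

    def reset(self):
        self.budget = self.initial_budget
        return np.append(self.env.reset(), self.budget)

    def step(self, action):
        obs, reward, cost, done = self.env.step(action)
        self.budget -= cost  # deduct the incurred safety cost from the budget
        return np.append(obs, self.budget), reward, cost, done
```

The policy and cost critic then condition on the augmented observation, so the remaining budget is part of the state the agent reasons about.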

📝 Abstract
Safety is a major concern in reinforcement learning (RL): we aim at developing RL systems that not only perform optimally, but are also safe to deploy by providing formal guarantees about their safety. To this end, we introduce Probabilistic Shielding via Risk Augmentation (ProSh), a model-free algorithm for safe reinforcement learning under cost constraints. ProSh augments the Constrained MDP state space with a risk budget and enforces safety by applying a shield to the agent's policy distribution using a learned cost critic. The shield ensures that all sampled actions remain safe in expectation. We also show that optimality is preserved when the environment is deterministic. Since ProSh is model-free, safety during training depends on the knowledge we have acquired about the environment. We provide a tight upper-bound on the cost in expectation, depending only on the backup-critic accuracy, that is always satisfied during training. Under mild, practically achievable assumptions, ProSh guarantees safety even at training time, as shown in the experiments.
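For context, the cost-constrained objective the abstract refers to has the standard CMDP form shown below (generic notation, not necessarily the paper's): maximize expected return subject to a bound on expected cumulative cost.

```latex
% Standard CMDP safety constraint (generic notation):
% maximize expected discounted return subject to a bound d
% on the expected discounted cumulative safety cost.
\max_{\pi} \; \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\Big] \le d
```

The risk budget in ProSh can be read as a per-trajectory bookkeeping of how much of the bound d remains available.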
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety in model-free reinforcement learning systems
Providing formal safety guarantees under cost constraints
Maintaining safety during training with probabilistic shielding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-free algorithm for safe reinforcement learning
Augments state space with risk budget
Applies shield to policy distribution using cost critic
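The bullets above can be sketched for a discrete action space: the shield zeroes out the probability of actions whose learned cost-critic estimate exceeds the remaining risk budget, then renormalizes the policy distribution. This is a minimal sketch under assumed names (`policy_probs`, `cost_q`, `budget`), not the paper's exact formulation.

```python
import numpy as np

def shield(policy_probs, cost_q, budget):
    """Mask actions whose expected cost exceeds the budget, then renormalize.

    policy_probs: probabilities the unshielded policy assigns to each action.
    cost_q:       cost-critic estimates of expected cumulative cost per action.
    budget:       remaining risk budget carried in the augmented state.
    """
    policy_probs = np.asarray(policy_probs, dtype=float)
    cost_q = np.asarray(cost_q, dtype=float)
    safe = cost_q <= budget            # admissible actions under the budget
    if not safe.any():                 # fallback: keep the least-risky action(s)
        safe = cost_q == cost_q.min()
    masked = np.where(safe, policy_probs, 0.0)
    return masked / masked.sum()
```

Sampling from the shielded distribution ensures every drawn action is admissible, which is how expected-cost safety is enforced during training as well as deployment.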
Edwin Hamel-De le Court
Imperial College London, United Kingdom
Gaspard Ohlmann
Mulhouse, France
Francesco Belardinelli
Imperial College London
Artificial Intelligence · Logic · Formal Methods