ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning

📅 2024-10-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the safety risks of trial-and-error exploration when reinforcement learning is deployed in the real world, this paper proposes ActSafe, a safe and efficient model-based active exploration framework. Methodologically, it introduces the first unified formulation that combines optimistic planning with respect to epistemic uncertainty about the unknown dynamics with pessimism with respect to uncertainty in the safety constraints. The framework combines Gaussian process and deep probabilistic models for dynamics learning, uncertainty-aware model predictive control, and optimization integrated with safety barrier functions. It provides theoretical guarantees of safety throughout learning and of convergence to a near-optimal policy in finite time. On standard safe deep RL benchmarks, the approach achieves state-of-the-art performance, scales to high-dimensional vision-based control tasks, and incurs zero safety violations across all experiments.
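The core principle in the summary above is optimism about what the agent can gain, paired with pessimism about what it can safely do. As a loose, hedged illustration of that principle (not the paper's actual algorithm, which plans over learned dynamics), the bandit-style sketch below models an unknown reward and an unknown cost with small hand-rolled Gaussian processes, then picks the action with the best optimistic reward bound among actions whose pessimistic cost bound stays within a budget. All function names, kernel settings, and constants here are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.4):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-4):
    """Posterior mean and std of a zero-mean GP (unit signal variance)."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_query)                      # (n_obs, n_query)
    alpha = np.linalg.solve(K, y_obs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.sum(Ks * v, axis=0)            # k(x, x) = 1 for this RBF
    return mu, np.sqrt(np.clip(var, 0.0, None))

def safe_optimistic_action(actions, r_data, c_data, beta=2.0, cost_limit=0.5):
    """Optimism for reward, pessimism for cost: maximize the reward upper
    bound over actions whose *upper* cost bound respects the limit."""
    mu_r, sd_r = gp_posterior(*r_data, actions)
    mu_c, sd_c = gp_posterior(*c_data, actions)
    safe = mu_c + beta * sd_c <= cost_limit       # pessimistic safety filter
    if not safe.any():
        raise RuntimeError("no provably safe action under the current model")
    ucb_r = np.where(safe, mu_r + beta * sd_r, -np.inf)
    return actions[np.argmax(ucb_r)]

# Toy demo: reward grows with the action, and so does the (hypothetical) cost.
acts = np.linspace(-1.0, 1.0, 41)
a_obs = np.array([-0.8, -0.2, 0.3])
r_obs = a_obs                                     # observed rewards
c_obs = 0.4 * a_obs + 0.1                         # observed costs
chosen = safe_optimistic_action(acts, (a_obs, r_obs), (a_obs, c_obs))
```

The chosen action sits near the frontier of the region the model can certify as safe, which mirrors the paper's idea of expanding the safe set by exploring optimistically at its boundary.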

📝 Abstract
Reinforcement learning (RL) is ubiquitous in the development of modern AI systems. However, state-of-the-art RL agents require extensive, and potentially unsafe, interactions with their environments to learn effectively. These limitations confine RL agents to simulated environments, hindering their ability to learn directly in real-world settings. In this work, we present ActSafe, a novel model-based RL algorithm for safe and efficient exploration. ActSafe learns a well-calibrated probabilistic model of the system and plans optimistically w.r.t. the epistemic uncertainty about the unknown dynamics, while enforcing pessimism w.r.t. the safety constraints. Under regularity assumptions on the constraints and dynamics, we show that ActSafe guarantees safety during learning while also obtaining a near-optimal policy in finite time. In addition, we propose a practical variant of ActSafe that builds on the latest model-based RL advances and enables safe exploration even in high-dimensional settings such as visual control. We empirically show that ActSafe obtains state-of-the-art performance in difficult exploration tasks on standard safe deep RL benchmarks while ensuring safety during learning.
Problem

Research questions and friction points this paper is trying to address.

How to explore safely during RL training
How to learn efficiently outside of simulation, in real-world settings
How to reach a near-optimal policy without safety violations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based RL algorithm (ActSafe) with a calibrated probabilistic dynamics model
Optimistic planning paired with pessimistic safety constraints for safe, efficient exploration
Practical variant that scales to high-dimensional visual control