CBF-RL: Safety Filtering Reinforcement Learning in Training with Control Barrier Functions

📅 2025-10-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Reinforcement learning (RL) often neglects safety considerations, while online safety filters—such as control barrier functions (CBFs)—tend to induce overly conservative policies. Method: This paper introduces CBF-RL, the first framework that explicitly embeds CBF-based safety constraints into the RL training process, enabling the policy to autonomously internalize safe behavior during learning and eliminating runtime dependence on online safety filters. The method ensures closed-loop safety guarantees in discrete-time settings, enhancing both exploration safety and convergence speed. Contribution/Results: Evaluated on simulated navigation tasks and the Unitree G1 humanoid robot, CBF-RL achieves stable obstacle avoidance and stair climbing without online filtering. It significantly improves training efficiency and robustness under uncertainty, demonstrating superior performance over conventional RL and filtered baselines.

📝 Abstract
Reinforcement learning (RL), while powerful and expressive, can often prioritize performance at the expense of safety. Yet safety violations can lead to catastrophic outcomes in real-world deployments. Control Barrier Functions (CBFs) offer a principled method to enforce dynamic safety -- traditionally deployed *online* via safety filters. While the result is safe behavior, the fact that the RL policy does not have knowledge of the CBF can lead to conservative behaviors. This paper proposes CBF-RL, a framework for generating safe behaviors with RL by enforcing CBFs *in training*. CBF-RL has two key attributes: (1) minimally modifying a nominal RL policy to encode safety constraints via a CBF term, and (2) safety filtering of the policy rollouts in training. Theoretically, we prove that continuous-time safety filters can be deployed via closed-form expressions on discrete-time rollouts. Practically, we demonstrate that CBF-RL internalizes the safety constraints in the learned policy -- both enforcing safer actions and biasing towards safer rewards -- enabling safe deployment without the need for an online safety filter. We validate our framework through ablation studies on navigation tasks and on the Unitree G1 humanoid robot, where CBF-RL enables safer exploration, faster convergence, and robust performance under uncertainty, enabling the humanoid robot to avoid obstacles and climb stairs safely in real-world settings without a runtime safety filter.
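The "closed-form expressions" the abstract refers to are the standard single-constraint CBF quadratic program, which admits an analytic solution. Below is a minimal sketch, not the paper's implementation: it assumes a single-integrator system (state derivative equals the action) and a hypothetical disc-shaped obstacle, and minimally perturbs a nominal action so the CBF condition `h_dot + alpha * h >= 0` holds.

```python
import numpy as np

def cbf_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Closed-form CBF safety filter for a single integrator x_dot = u.

    h(x) = ||x - x_obs||^2 - r^2 keeps the state outside a disc of
    radius r around x_obs. The filter returns the action closest to
    u_nom that satisfies h_dot + alpha * h >= 0.
    """
    h = np.dot(x - x_obs, x - x_obs) - r ** 2
    grad_h = 2.0 * (x - x_obs)           # dh/dx; for x_dot = u, h_dot = grad_h . u
    psi = grad_h @ u_nom + alpha * h     # CBF constraint residual at u_nom
    if psi >= 0.0:                       # nominal action is already safe
        return u_nom
    # Closed-form solution of the one-constraint safety QP:
    # project u_nom onto the half-space {u : grad_h . u + alpha * h >= 0}
    return u_nom - psi * grad_h / (grad_h @ grad_h)
```

When the nominal action already satisfies the constraint, it passes through unchanged, which is the "minimal modification" property the paper's first attribute relies on.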
Problem

Research questions and friction points this paper is trying to address.

Ensuring reinforcement learning safety without conservative behavior
Integrating Control Barrier Functions into RL training process
Enabling safe robot deployment without runtime safety filters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Control Barrier Functions into training phase
Modifies RL policy minimally with CBF safety constraints
Filters policy rollouts during training for safety
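The third innovation, filtering rollouts during training, can be sketched as a rollout loop in which every policy action passes through the safety filter before it reaches the environment. This is an illustrative interface, not the paper's code; `policy`, `env_step`, and `safety_filter` are hypothetical callables.

```python
def rollout_with_training_filter(policy, env_step, x0, steps, safety_filter):
    """Collect one training rollout where each nominal policy action is
    replaced by its safety-filtered version before being applied.

    The filtered action is what gets stored in the trajectory, so the
    RL update sees (and can internalize) the safe behavior.
    """
    x, traj = x0, []
    for _ in range(steps):
        u_nom = policy(x)                 # nominal (possibly unsafe) action
        u = safety_filter(x, u_nom)       # minimally modified safe action
        x_next = env_step(x, u)
        traj.append((x, u, x_next))
        x = x_next
    return traj
```

Because the logged transitions contain the filtered actions, the learned policy is trained on safe data and, per the paper's claim, no longer needs the filter at deployment time.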