SHIELD: Safety on Humanoids via CBFs In Expectation on Learned Dynamics

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of enforcing real-time safety constraints on bipedal robots during dynamic locomotion—constraints that are typically difficult to guarantee without retraining the underlying learning-based controller. To this end, the authors propose SHIELD, a hierarchical safety framework that operates as a plug-and-play safety layer without modifying the original learned motion controller. SHIELD integrates a data-driven generative stochastic dynamics residual model with a novel "in-expectation" discrete-time control barrier function (CBF), enabling probabilistic and tunable safety guarantees. Uncertainty is captured from real-world hardware rollout data of the nominal controller, and the minimal-intervention architecture ensures real-time deployability. The authors validate SHIELD on a Unitree G1 robot across diverse indoor and outdoor navigation scenarios, demonstrating robust obstacle avoidance and safe locomotion. The framework provides dynamic safety assurance for unknown or black-box reinforcement learning controllers—without requiring controller retraining—while maintaining high robustness and computational efficiency.

📝 Abstract
Robot learning has produced remarkably effective "black-box" controllers for complex tasks such as dynamic locomotion on humanoids. Yet ensuring dynamic safety, i.e., constraint satisfaction, remains challenging for such policies. Reinforcement learning (RL) embeds constraints heuristically through reward engineering, and adding or modifying constraints requires retraining. Model-based approaches, like control barrier functions (CBFs), enable runtime constraint specification with formal guarantees but require accurate dynamics models. This paper presents SHIELD, a layered safety framework that bridges this gap by: (1) training a generative, stochastic dynamics residual model using real-world data from hardware rollouts of the nominal controller, capturing system behavior and uncertainties; and (2) adding a safety layer on top of the nominal (learned locomotion) controller that leverages this model via a stochastic discrete-time CBF formulation enforcing safety constraints in probability. The result is a minimally-invasive safety layer that can be added to the existing autonomy stack to give probabilistic guarantees of safety that balance risk and performance. In hardware experiments on a Unitree G1 humanoid, SHIELD enables safe navigation (obstacle avoidance) through varied indoor and outdoor environments using a nominal (unknown) RL controller and onboard perception.
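The abstract's core mechanism—a discrete-time CBF condition enforced in expectation over a stochastic dynamics model, wrapped as a minimal-intervention filter around a nominal action—can be illustrated with a toy sketch. This is not the paper's implementation: the 2D single-integrator dynamics, the Gaussian stand-in for the learned residual, the grid search over actions (in place of a proper optimization), and all constants (`DT`, `GAMMA`, obstacle geometry) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

DT, GAMMA, RADIUS = 0.1, 0.5, 1.0      # illustrative constants
OBS = np.array([2.0, 0.0])             # obstacle center (assumed)

def h(x):
    """Barrier function: positive outside the obstacle's keep-out disk."""
    return float(np.dot(x - OBS, x - OBS) - RADIUS**2)

def step_samples(x, u, n=200):
    """Sampled next states: nominal single-integrator step plus Gaussian
    noise standing in for the learned stochastic residual model."""
    mean = x + DT * u
    return mean + rng.normal(scale=0.02, size=(n, 2))

def safety_filter(x, u_nom, grid=np.linspace(-1.0, 1.0, 21)):
    """Minimal-intervention filter: among candidate actions, return the one
    closest to u_nom whose Monte-Carlo estimate of the expected barrier
    satisfies the discrete-time CBF condition
        E[h(x_next)] >= (1 - GAMMA) * h(x)."""
    best, best_cost = None, np.inf
    for ux in grid:
        for uy in grid:
            u = np.array([ux, uy])
            exp_h = np.mean([h(xs) for xs in step_samples(x, u)])
            if exp_h >= (1 - GAMMA) * h(x):
                cost = float(np.dot(u - u_nom, u - u_nom))
                if cost < best_cost:
                    best, best_cost = u, cost
    return best

# Far from the obstacle, the nominal action already satisfies the
# condition, so the filter leaves it unchanged (minimal intervention).
x = np.array([0.0, 0.0])
u_nom = np.array([1.0, 0.0])   # nominal controller drives toward the obstacle
u_safe = safety_filter(x, u_nom)
```

Near the obstacle boundary the same filter deviates from `u_nom` just enough to keep the expected barrier value above the decay threshold, which is the "balance risk and performance" behavior the abstract describes, with `GAMMA` tuning how aggressively safety is enforced.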
Problem

Research questions and friction points this paper is trying to address.

Ensuring dynamic safety for learned black-box robot controllers
Bridging model-based safety guarantees with learned dynamics
Enabling probabilistic safety constraints without retraining policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative stochastic dynamics residual model training
Stochastic discrete-time CBF safety layer
Probabilistic safety guarantees for learned controllers
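The first innovation—fitting a generative stochastic residual model from hardware rollouts of the nominal controller—can be sketched in a minimal form. The paper trains a generative model from real robot data; here, as a hedged stand-in, the "rollouts" are synthetic, the nominal model is a single integrator, and the residual is fit as a plain Gaussian (mean and covariance), all of which are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
DT = 0.1

def f_nominal(x, u):
    """Simplified nominal dynamics model: single-integrator step (assumed)."""
    return x + DT * u

# Synthetic stand-in for hardware rollout data: the "true" system has an
# unmodeled drift plus noise that the residual model must capture.
true_drift = np.array([0.03, -0.01])
X = rng.uniform(-1, 1, size=(500, 2))
U = rng.uniform(-1, 1, size=(500, 2))
X_next = f_nominal(X, U) + true_drift + rng.normal(scale=0.02, size=(500, 2))

# Fit residuals r_k = x_{k+1} - f_nominal(x_k, u_k) as a Gaussian — a
# minimal stand-in for the paper's generative stochastic residual model.
R = X_next - f_nominal(X, U)
mu, Sigma = R.mean(axis=0), np.cov(R.T)

def sample_next(x, u, n=100):
    """Generative one-step prediction: nominal step plus sampled residuals.
    This is what an in-expectation CBF layer would draw samples from."""
    return f_nominal(x, u) + rng.multivariate_normal(mu, Sigma, size=n)
```

Because the model is generative, the safety layer can draw next-state samples and evaluate constraints in expectation (or in probability), rather than committing to a single deterministic prediction.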