Learning Safe Control via On-the-Fly Bandit Exploration

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address safety filter failure—i.e., infeasibility of admissible control inputs—under high model uncertainty, this paper proposes an online safe learning framework that operates without fallback controllers. When safety constraints become infeasible, the method leverages Control Barrier Functions (CBFs) to guide Gaussian process–driven bounded active exploration, enabling closed-loop data collection that incrementally improves safety certification. It is the first approach to achieve safe learning under zero-mean prior dynamics without requiring backup controllers. By tightly integrating CBF-based safety verification with Bayesian active exploration, the framework theoretically guarantees simultaneous improvement in closed-loop safety and filter feasibility. Crucially, it eliminates reliance on conservative error bounds or auxiliary controllers, delivering rigorous, adaptive safety guarantees even under severe model uncertainty.

📝 Abstract
Control tasks with safety requirements under high levels of model uncertainty are increasingly common. Machine learning techniques are frequently used to address such tasks, typically by leveraging model error bounds to specify robust constraint-based safety filters. However, if the learned model uncertainty is very high, the corresponding filters are potentially invalid, meaning no control input satisfies the constraints imposed by the safety filter. While most works address this issue by assuming some form of safe backup controller, ours tackles it by collecting additional data on the fly using a Gaussian process bandit-type algorithm. We combine a control barrier function with a learned model to specify a robust certificate that ensures safety if feasible. Whenever infeasibility occurs, we leverage the control barrier function to guide exploration, ensuring the collected data contributes toward the closed-loop system safety. By combining a safety filter with exploration in this manner, our method provably achieves safety in a setting that allows for a zero-mean prior dynamics model, without requiring a backup controller. To the best of our knowledge, it is the first safe learning-based control method that achieves this.
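The robust certificate described in the abstract can be illustrated with a minimal input-constrained safety filter: project a desired input onto the robust CBF condition, and report infeasibility when no admissible input satisfies it. Everything below is a hypothetical sketch, not the paper's implementation; the dynamics `f`, `g`, barrier `h`, class-K function `alpha`, and uniform error bound `beta` are illustrative placeholders for the learned model and its GP error bound.

```python
import numpy as np

def robust_cbf_filter(u_des, x, f, g, h, grad_h, alpha, beta, u_min, u_max):
    """Project a desired scalar input u_des onto the robust CBF constraint
        grad_h(x) @ (f(x) + g(x) * u) - beta * ||grad_h(x)|| + alpha(h(x)) >= 0,
    subject to u in [u_min, u_max]. Returns (u_safe, feasible). The beta term
    robustifies against a uniformly bounded model error."""
    dh = grad_h(x)
    a = dh @ g(x)  # coefficient of u in the affine constraint a*u + b >= 0
    b = dh @ f(x) - beta * np.linalg.norm(dh) + alpha(h(x))
    if abs(a) < 1e-12:
        # Input does not affect the constraint; feasibility depends on b alone.
        return np.clip(u_des, u_min, u_max), bool(b >= 0)
    u_bound = -b / a
    # Intersect the half-line of safe inputs with the input bounds.
    lo, hi = (max(u_min, u_bound), u_max) if a > 0 else (u_min, min(u_max, u_bound))
    if lo > hi:
        # Safety filter infeasible: no admissible input certifies safety.
        return np.clip(u_des, u_min, u_max), False
    return float(np.clip(u_des, lo, hi)), True
```

In a 1-D example with `h(x) = 1 - x`, a small error bound leaves the filter feasible and merely clips the input, while a large bound (high model uncertainty) makes it infeasible, which is exactly the situation where the paper switches to exploration.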
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety in control tasks under high model uncertainty
Addressing infeasibility of safety filters via on-the-fly exploration
Achieving safe learning-based control without backup controllers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Gaussian process bandit for on-the-fly exploration
Combines control barrier function with learned model
Ensures safety without requiring backup controller
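The bandit-style exploration in the bullets above can be sketched as an acquisition rule: when the filter is infeasible, pick the input with the highest GP posterior uncertainty, biased toward the direction in which the CBF constraint suggests safety can be regained. This is a hedged toy sketch under a zero-mean RBF prior; the acquisition `score` and the names `posterior_std` and `exploration_input` are illustrative, not the paper's algorithm.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between two 1-D arrays of scalar inputs."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def posterior_std(u_cand, U_data, noise=1e-3):
    """GP posterior std of the scalar model error at candidate inputs,
    given previously visited inputs U_data and a zero-mean prior."""
    K = rbf(U_data, U_data) + noise * np.eye(len(U_data))
    k = rbf(u_cand, U_data)
    prior_var = rbf(u_cand, u_cand).diagonal()
    var = prior_var - np.einsum('ij,jk,ik->i', k, np.linalg.inv(K), k)
    return np.sqrt(np.maximum(var, 0.0))

def exploration_input(u_cand, U_data, dh_g):
    """When the safety filter is infeasible, choose the candidate input with
    the largest posterior uncertainty, weighted by the CBF constraint
    coefficient dh_g so exploration favors inputs that push h upward.
    (Hypothetical acquisition rule for illustration.)"""
    score = posterior_std(u_cand, U_data) * dh_g * u_cand
    return float(u_cand[np.argmax(score)])
```

With data collected only at `u = 0`, the rule selects the least-explored candidate on whichever side the constraint coefficient points to, so each collected sample shrinks the GP uncertainty exactly where the safety certificate needs it.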
A. Capone
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
Ryan K. Cosner
Assistant Professor, Tufts University
Nonlinear Control · Machine Learning · Robotics
Aaron Ames
Department of Mechanical and Civil Engineering, California Institute of Technology, Pasadena, CA, USA
Sandra Hirche
Technical University of Munich