🤖 AI Summary
This work addresses the challenge of effectively quantifying uncertainty in unknown functions—such as dynamics, rewards, or constraints—when deploying learning-based control methods in safety-critical systems, without relying on strong prior assumptions like function smoothness or norm bounds. The authors model the unknown function as a sampleable stochastic process and propose a purely data-driven scenario approach to construct high-probability uncertainty tubes that accommodate discontinuous functions. These tubes are integrated into a safe Bayesian optimization framework to enable automatic and safe tuning of control parameters. By eliminating the need for assumptions such as known Lipschitz constants, the method significantly enhances the applicability and robustness of safe learning-based control. Experimental validation on a real Furuta pendulum system demonstrates the approach’s effectiveness and practical feasibility.
📝 Abstract
Uncertainty quantification is essential when deploying learning-based control methods in safety-critical systems. This is commonly realized by constructing uncertainty tubes that enclose the unknown function of interest, e.g., the reward and constraint functions or the underlying dynamics model, with high probability. However, existing approaches for uncertainty quantification typically rely on restrictive assumptions on the unknown function, such as known bounds on functional norms or Lipschitz constants, and struggle with discontinuities. In this paper, we model the unknown function as a random function from which independent and identically distributed realizations can be generated, and construct uncertainty tubes via the scenario approach that hold with high probability and rely solely on the sampled realizations. We integrate these uncertainty tubes into a safe Bayesian optimization algorithm, which we then use to safely tune control parameters on a real Furuta pendulum.
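The core idea — drawing i.i.d. realizations of the unknown function and taking a pointwise envelope over them as an uncertainty tube — can be illustrated with a minimal sketch. This is not the authors' algorithm (the paper's scenario-approach construction comes with explicit high-probability guarantees and is integrated into safe Bayesian optimization); the random function below and the plain min/max envelope are illustrative assumptions only, chosen to show that such a tube needs no smoothness or Lipschitz assumptions and handles discontinuities:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_realization(x, rng):
    """Draw one i.i.d. realization of a hypothetical random function.

    The function is deliberately discontinuous in x (a scaled sign wave
    with random amplitude and offset), which a Lipschitz- or norm-based
    bound could not handle, but a sample-based envelope can.
    """
    amplitude = rng.normal(1.0, 0.2)
    offset = rng.normal(0.0, 0.1)
    return amplitude * np.sign(np.sin(3.0 * x)) + offset

# Evaluation grid and number of sampled scenarios (realizations).
x = np.linspace(0.0, 2.0 * np.pi, 200)
n_scenarios = 100
samples = np.stack([sample_realization(x, rng) for _ in range(n_scenarios)])

# Pointwise envelope over the sampled realizations: every sampled
# realization lies inside [lower, upper] by construction; the scenario
# approach quantifies the probability that a *new* realization does too.
lower = samples.min(axis=0)
upper = samples.max(axis=0)
```

A new draw of the random function then lies inside the tube with a probability that, in the scenario approach, can be bounded as a function of the number of samples — which is what makes the tube usable as a high-probability constraint inside a safe optimization loop.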