🤖 AI Summary
This paper studies safe online reinforcement learning for controlling an unknown one-dimensional linear dynamical system, requiring the system state to remain within a given safety region with high probability at all times while achieving optimal regret. To this end, we propose the first safe online Linear Quadratic Regulator (LQR) algorithm with rigorous theoretical guarantees. Our method employs a truncated linear controller, a structurally simple yet nonlinear policy baseline, and establishes the continuity of its optimal parameters with respect to the safety boundary, revealing a constraint-induced acceleration in learning. By integrating high-probability safety-constrained design, robust parameter estimation, and stability analysis, we achieve, for the first time, an $\tilde{O}(\sqrt{T})$ regret bound, matching the fundamental lower bound of unconstrained LQR and strictly improving upon existing safe RL algorithms, all while ensuring strict, global safety throughout the entire learning horizon.
📝 Abstract
Understanding how to efficiently learn while adhering to safety constraints is essential for using online reinforcement learning in practical applications. However, proving rigorous regret bounds for safety-constrained reinforcement learning is difficult due to the complex interaction between safety, exploration, and exploitation. In this work, we seek to establish foundations for safety-constrained reinforcement learning by studying the canonical problem of controlling a one-dimensional linear dynamical system with unknown dynamics. We study the safety-constrained version of this problem, where the state must with high probability stay within a safe region, and we provide the first safe algorithm that achieves regret of $\tilde{O}_T(\sqrt{T})$. Furthermore, the regret is with respect to the baseline of truncated linear controllers, a natural baseline of non-linear controllers that are well-suited for safety-constrained linear systems. In addition to introducing this new baseline, we also prove several desirable continuity properties of the optimal controller in this baseline. In showing our main result, we prove that whenever the constraints impact the optimal controller, the non-linearity of our controller class leads to a faster rate of learning than in the unconstrained setting.
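To make the truncated-linear-controller idea concrete, here is a minimal sketch. It assumes a 1-D system $x_{t+1} = a x_t + b u_t + w_t$ with safe region $[-s, s]$; the policy applies a linear law $u = -kx$ and then truncates the input so the predicted (noise-free) next state stays inside the safe region. The specific dynamics, gains, and truncation rule below are illustrative assumptions, not the paper's exact parameterization or algorithm.

```python
import numpy as np

def truncated_linear_control(x, a, b, k, s):
    """Illustrative truncated linear policy (assumed form, b != 0).

    Applies the nominal linear law u = -k*x, then truncates u to the
    interval of inputs for which the predicted next state a*x + b*u
    lies inside the safe region [-s, s].
    """
    u = -k * x
    # Inputs keeping a*x + b*u within [-s, s]:
    e1, e2 = (-s - a * x) / b, (s - a * x) / b
    lo, hi = min(e1, e2), max(e1, e2)
    return min(max(u, lo), hi)

# Simulate the closed loop under small bounded-ish Gaussian noise.
rng = np.random.default_rng(0)
a, b, k, s = 1.2, 1.0, 0.9, 2.0  # open-loop unstable (a > 1), hypothetical values
x = 0.5
trajectory = []
for _ in range(50):
    u = truncated_linear_control(x, a, b, k, s)
    x = a * x + b * u + 0.05 * rng.standard_normal()
    trajectory.append(x)
print(max(abs(v) for v in trajectory))
```

Because the truncation is applied to the predicted next state, any safety violation is due only to the additive noise term, which is why the paper's guarantees are stated with high probability rather than deterministically.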