An Optimistic Algorithm for Online CMDPs with Anytime Adversarial Constraints

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies online safe reinforcement learning in dynamic adversarial environments, specifically constrained Markov decision processes (CMDPs) with unknown, time-varying, and potentially adversarial safety constraints, where cumulative reward maximization and real-time safety satisfaction must be achieved jointly. To overcome the limitations of existing methods (failure under adversarial constraints, reliance on Slater's condition, or access to a priori safe policies), the authors propose the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, the first to address this setting. OMDPD integrates online convex optimization, dual updates, and model uncertainty estimation without requiring any feasibility assumptions, and achieves optimal $\mathcal{O}(\sqrt{K})$ cumulative regret and $\mathcal{O}(\sqrt{K})$ cumulative constraint violation over $K$ episodes. With accurate reward and transition estimates, these bounds can be tightened further. The framework offers both theoretical guarantees and practical applicability in high-stakes domains such as autonomous driving and robotics.
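To make the primal-dual mechanism concrete, the following is a minimal toy sketch of an optimistic mirror-descent primal-dual loop, not the paper's full OMDPD: it drops the transition model and uncertainty estimation and reduces the CMDP to a single-state (bandit-style) problem, where the policy is a distribution over actions and the per-episode reward and cost vectors may be chosen adversarially. All names, step sizes, and the "current plus momentum" optimistic gradient prediction are illustrative assumptions.

```python
import numpy as np

def omd_primal_dual_toy(rewards, costs, K, n_actions, eta=0.1, alpha=0.1):
    """Toy optimistic mirror-descent primal-dual loop (single-state sketch).

    rewards[k], costs[k]: per-episode reward/cost vectors (possibly adversarial).
    Returns the final policy, dual variable, and cumulative reward/violation.
    NOTE: illustrative simplification, not the paper's OMDPD algorithm.
    """
    pi = np.full(n_actions, 1.0 / n_actions)   # uniform initial policy
    lam = 0.0                                  # dual variable for the constraint
    prev_grad = np.zeros(n_actions)            # last gradient, used as the optimistic prediction
    total_reward, total_violation = 0.0, 0.0
    for k in range(K):
        r, c = rewards[k], costs[k]
        # Gradient of the Lagrangian w.r.t. the policy: maximize reward, penalize cost
        grad = r - lam * c
        # Optimistic step: extrapolate using the previous gradient as a prediction
        opt_grad = 2.0 * grad - prev_grad
        # Entropic mirror descent (exponentiated gradient) keeps pi on the simplex
        pi = pi * np.exp(eta * opt_grad)
        pi /= pi.sum()
        prev_grad = grad
        # Dual ascent on the constraint violation (constraint threshold 0 here)
        viol = float(pi @ c)
        lam = max(0.0, lam + alpha * viol)
        total_reward += float(pi @ r)
        total_violation += max(0.0, viol)
    return pi, lam, total_reward, total_violation
```

The entropic mirror map is a natural choice here because the exponentiated-gradient update keeps the policy a valid probability distribution without an explicit projection; the dual variable `lam` grows when the constraint is violated, steering the primal update back toward safety.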

📝 Abstract
Online safe reinforcement learning (RL) plays a key role in dynamic environments, with applications in autonomous driving, robotics, and cybersecurity. The objective is to learn optimal policies that maximize rewards while satisfying safety constraints modeled by constrained Markov decision processes (CMDPs). Existing methods achieve sublinear regret under stochastic constraints but often fail in adversarial settings, where constraints are unknown, time-varying, and potentially adversarially designed. In this paper, we propose the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, the first to address online CMDPs with anytime adversarial constraints. OMDPD achieves optimal regret $\mathcal{O}(\sqrt{K})$ and strong constraint violation $\mathcal{O}(\sqrt{K})$ without relying on Slater's condition or the existence of a strictly known safe policy. We further show that access to accurate estimates of rewards and transitions can improve these bounds. Our results offer practical guarantees for safe decision-making in adversarial environments.
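For readers unfamiliar with the two performance metrics, a common formalization over $K$ episodes is sketched below. The notation (value functions $V^{\pi}_{r_k}$, $V^{\pi_k}_{c_k}$, initial state $s_1$, constraint thresholds $b_k$) is assumed for illustration and may differ from the paper's exact definitions.

```latex
% Regret: gap to the best fixed policy in hindsight on the reward values
\mathrm{Regret}(K) \;=\; \max_{\pi} \sum_{k=1}^{K} V^{\pi}_{r_k}(s_1)
  \;-\; \sum_{k=1}^{K} V^{\pi_k}_{r_k}(s_1)

% Strong constraint violation: positive parts are summed, so violations
% in different episodes cannot cancel against episodes of slack
\mathrm{Violation}(K) \;=\; \sum_{k=1}^{K}
  \Big[\, V^{\pi_k}_{c_k}(s_1) - b_k \,\Big]_{+},
\qquad [x]_{+} = \max(x, 0)
```

The "strong" form matters under adversarial constraints: a plain sum of signed violations could look small even if the learner is badly unsafe in many episodes.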
Problem

Research questions and friction points this paper is trying to address.

Online safe RL for dynamic environments with adversarial constraints
Optimal policy learning under time-varying unknown safety constraints
Achieving sublinear regret without strict safe policy assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimistic Mirror Descent Primal-Dual algorithm
Handles anytime adversarial constraints effectively
Achieves optimal regret without Slater's condition
Jiahui Zhu
School of Electrical Engineering & Computer Science, Pullman, USA
Kihyun Yu
Department of Industrial & Systems Engineering, KAIST, Daejeon, South Korea
Dabeen Lee
Department of Mathematical Sciences, Seoul National University
Optimization · Mathematical Programming · Algorithms · Machine Learning
Xin Liu
School of Information Science & Technology, ShanghaiTech University, Shanghai, China
Honghao Wei
Assistant Professor of EECS, Washington State University
Reinforcement Learning · Optimization · Safe-RL