Near-Optimal Sample Complexity for Online Constrained MDPs

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental challenge in online constrained Markov decision processes (CMDPs) of simultaneously ensuring safety and sample efficiency. The authors propose a model-based primal-dual algorithm that controls both cumulative regret and constraint violation under two regimes: relaxed feasibility (small violations allowed) and strict feasibility (zero violations). The method achieves near-optimal sample complexity: Õ(SAH³/ε²) episodes under relaxed feasibility, matching the lower bound for unconstrained MDPs, and Õ(SAH⁵/(ε²ζ²)) episodes under strict feasibility, matching the lower bound for CMDPs with access to a generative model. This shows for the first time that online CMDP learning is no harder than learning unconstrained MDPs (when small violations are allowed) or CMDPs with a generative model. A key element of the analysis is the problem-dependent Slater constant ζ, which characterizes the size of the feasible region.

📝 Abstract
Safety is a fundamental challenge in reinforcement learning (RL), particularly in real-world applications such as autonomous driving, robotics, and healthcare. To address this, Constrained Markov Decision Processes (CMDPs) are commonly used to enforce safety constraints while optimizing performance. However, existing methods often suffer from significant safety violations or require a high sample complexity to generate near-optimal policies. We address two settings: relaxed feasibility, where small violations are allowed, and strict feasibility, where no violation is allowed. We propose a model-based primal-dual algorithm that balances regret and bounded constraint violations, drawing on techniques from online RL and constrained optimization. For relaxed feasibility, we prove that our algorithm returns an $\varepsilon$-optimal policy with $\varepsilon$-bounded violation with arbitrarily high probability, requiring $\tilde{O}\left(\frac{SAH^3}{\varepsilon^2}\right)$ learning episodes, matching the lower bound for unconstrained MDPs. For strict feasibility, we prove that our algorithm returns an $\varepsilon$-optimal policy with zero violation with arbitrarily high probability, requiring $\tilde{O}\left(\frac{SAH^5}{\varepsilon^2\zeta^2}\right)$ learning episodes, where $\zeta$ is the problem-dependent Slater constant characterizing the size of the feasible region. This result matches the lower bound for learning CMDPs with access to a generative model. Our results demonstrate that learning CMDPs in an online setting is as easy as learning with a generative model and is no more challenging than learning unconstrained MDPs when small violations are allowed.
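The primal-dual mechanism the abstract refers to can be illustrated on a toy problem. This is a minimal sketch of generic Lagrangian dual ascent on a one-step, two-action CMDP, not the paper's model-based algorithm (the model is assumed known and exploration bonuses are omitted); all rewards, utilities, and step sizes below are illustrative assumptions.

```python
# Minimal primal-dual sketch on a one-step, two-action CMDP (H = 1).
# NOT the paper's algorithm: the model is known here and exploration
# bonuses are omitted. All numbers are illustrative.
r = [1.0, 0.5]   # reward of each action
u = [0.0, 1.0]   # utility (safety signal) of each action
b = 0.6          # constraint: expected utility must be >= b

lam, eta, T = 0.0, 0.05, 5000   # dual variable, step size, iterations
total_r = total_u = 0.0
for _ in range(T):
    # Primal step: best response to the Lagrangian r(a) + lam * u(a).
    a = max(range(2), key=lambda i: r[i] + lam * u[i])
    total_r += r[a]
    total_u += u[a]
    # Dual step: raise lam when the constraint is violated, lower it otherwise.
    lam = max(0.0, lam + eta * (b - u[a]))

avg_r, avg_u = total_r / T, total_u / T
print(f"avg reward {avg_r:.3f}, avg utility {avg_u:.3f}")
```

The time-averaged (mixed) policy approaches the optimal feasible mixture, here playing action 1 with probability about 0.6, which yields average reward near 0.7 while meeting the utility constraint. The paper's contribution lies in coupling such updates with optimistic model estimates so that the number of episodes matches the stated lower bounds.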
Problem

Research questions and friction points this paper is trying to address.

Constrained Markov Decision Processes
Online Reinforcement Learning
Sample Complexity
Safety Constraints
Regret Minimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constrained MDPs
Online Reinforcement Learning
Primal-Dual Algorithm
Sample Complexity
Safety Constraints
Chang Liu
University of California, Los Angeles
Yunfan Li
Sichuan University, College of Computer Science, Chengdu, China
Lin F. Yang
University of California, Los Angeles