Learning Deterministic Policies with Policy Gradients in Constrained Markov Decision Processes

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses deterministic policy learning in constrained Markov decision processes (CMDPs), where the goal is to maximize expected return while satisfying domain-specific constraints. Existing policy-based methods rely on stochastic (hyper)policies and offer no guarantees for the deterministic policies that are actually deployed. The authors propose C-PG, an exploration-agnostic Constrained Policy Gradient algorithm with global last-iterate convergence guarantees under gradient domination assumptions. When the stochastic (hyper)policy is expressed as a noisy perturbation of the actions or the parameters of an underlying deterministic policy, C-PG is further shown to converge globally to the optimal deterministic policy, which can then be deployed by switching off the stochasticity after training. Two variants, C-PGAE (action-based exploration) and C-PGPE (parameter-based exploration), are evaluated on constrained control benchmarks, where they outperform state-of-the-art baselines, particularly when deterministic policies are deployed after training.

📝 Abstract
Constrained Reinforcement Learning (CRL) addresses sequential decision-making problems where agents are required to achieve goals by maximizing the expected return while meeting domain-specific constraints. In this setting, policy-based methods are widely used thanks to their advantages when dealing with continuous-control problems. These methods search in the policy space with an action-based or a parameter-based exploration strategy, depending on whether they learn the parameters of a stochastic policy or those of a stochastic hyperpolicy. We introduce an exploration-agnostic algorithm, called C-PG, which enjoys global last-iterate convergence guarantees under gradient domination assumptions. Furthermore, under specific noise models where the (hyper)policy is expressed as a stochastic perturbation of the actions or of the parameters of an underlying deterministic policy, we additionally establish global last-iterate convergence guarantees of C-PG to the optimal deterministic policy. This holds when learning a stochastic (hyper)policy and subsequently switching off the stochasticity at the end of training, thereby deploying a deterministic policy. Finally, we empirically validate both the action-based (C-PGAE) and parameter-based (C-PGPE) variants of C-PG on constrained control tasks, and compare them against state-of-the-art baselines, demonstrating their effectiveness, in particular when deploying deterministic policies after training.
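The Lagrangian-style primal-dual scheme and the "train with action noise, deploy deterministically" idea described above can be sketched in a few lines. This is a minimal illustration in the spirit of the action-based variant (C-PGAE), not the paper's actual algorithm: the one-step task, the linear policy `a = theta * s`, and all constants are assumptions chosen for the toy example.

```python
import numpy as np

# Minimal sketch of a Lagrangian primal-dual policy-gradient loop with
# action-based exploration (in the spirit of C-PGAE). The toy task, the
# linear policy, and every constant below are illustrative assumptions.

rng = np.random.default_rng(0)

def reward(s, a):                  # toy reward: track the target action 2*s
    return -(a - 2.0 * s) ** 2

def cost(s, a):                    # toy constraint cost: penalize large actions
    return a ** 2

COST_LIMIT = 5.0                   # constraint: expected cost <= COST_LIMIT

theta = 0.0                        # parameter of the deterministic policy
lam = 0.0                          # Lagrange multiplier
sigma = 0.5                        # action noise, switched off at deployment
eta_theta, eta_lam = 0.02, 0.01

for _ in range(5000):
    s = rng.uniform(-1.0, 1.0)
    eps = rng.normal(0.0, sigma)
    a = theta * s + eps            # stochastic perturbation of a deterministic policy
    # sample Lagrangian: reward minus penalized constraint violation
    L = reward(s, a) - lam * (cost(s, a) - COST_LIMIT)
    grad_log = eps * s / sigma**2  # score of the Gaussian policy w.r.t. theta
    theta += eta_theta * np.clip(L * grad_log, -5.0, 5.0)     # primal ascent
    lam = max(0.0, lam + eta_lam * (cost(s, a) - COST_LIMIT))  # dual ascent

# Deployment: switch off the stochasticity and act deterministically.
def deterministic_action(s):
    return theta * s
```

Because the exploration noise is an additive perturbation of an underlying deterministic policy, discarding `eps` at the end of training yields the deterministic policy directly, which mirrors the deployment scheme the abstract describes.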
Problem

Research questions and friction points this paper is trying to address.

Learn deterministic policies in constrained reinforcement learning, where existing methods lack theoretical guarantees.
Establish global last-iterate convergence under gradient domination assumptions.
Show that policies trained stochastically remain effective when deployed deterministically after training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploration-agnostic C-PG algorithm for constrained reinforcement learning
Global last-iterate convergence guarantees, extending to optimal deterministic policies under perturbation-based noise models
Action-based (C-PGAE) and parameter-based (C-PGPE) exploration variants
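The parameter-based alternative listed above perturbs the policy's parameters rather than its actions: each rollout uses a deterministic policy whose parameters are sampled from a Gaussian hyperpolicy. A minimal sketch in the spirit of C-PGPE, with a toy return function and illustrative constants (not the paper's setup):

```python
import numpy as np

# Sketch of parameter-based exploration (in the spirit of C-PGPE): sample
# perturbed parameters from a Gaussian hyperpolicy, evaluate the resulting
# deterministic policy, and update the hyperpolicy mean. The toy return
# function and all constants are illustrative assumptions.

rng = np.random.default_rng(1)

def episode_return(theta):         # toy episodic return, maximized at theta = 2
    return -(theta - 2.0) ** 2

mu = 0.0                           # hyperpolicy mean over policy parameters
sigma_p = 0.5                      # hyperpolicy standard deviation
eta = 0.02

for _ in range(3000):
    eps = rng.normal(0.0, sigma_p)
    theta = mu + eps               # perturb the parameters, not the actions
    J = episode_return(theta)      # each rollout uses a deterministic policy
    # PGPE-style score-function estimate of the gradient w.r.t. mu
    mu += eta * np.clip(J * eps / sigma_p**2, -5.0, 5.0)

# Deployment: run the deterministic policy with parameters mu directly.
```

The contrast with action-based exploration is that the noise lives in parameter space, so every executed trajectory already comes from a deterministic policy; deployment simply fixes the parameters at the hyperpolicy mean.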