A Policy Gradient Approach for Finite Horizon Constrained Markov Decision Processes

📅 2022-10-10
🏛️ IEEE Conference on Decision and Control
📈 Citations: 8
Influential: 0
🤖 AI Summary
This work addresses the problem of learning non-stationary optimal policies for finite-horizon constrained Markov decision processes (CMDPs), filling a gap left by prior studies, which focus on infinite-horizon settings where stationary policies are optimal. The authors propose the first policy gradient algorithm specifically designed for finite-horizon CMDPs, employing time-varying (stage-indexed) parametric policies and Lagrangian relaxation to handle the constraints, and they prove convergence to a constrained-optimal solution under standard regularity conditions. The method accommodates continuous state-action spaces and uses function approximation, which is essential when the state and action spaces are large. Empirical evaluation on benchmark tasks shows improvements in cumulative reward, constraint satisfaction, and convergence stability over well-known baselines, supporting both the theoretical guarantees and the practical efficacy of the approach.
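As a rough illustration of the ingredients the summary mentions, the sketch below combines a stage-indexed (time-varying) softmax policy with Lagrangian relaxation of a cost constraint. This is not the authors' exact algorithm: the horizon, rewards, costs, budget, and step sizes are all invented for the example, and the problem is reduced to a per-stage two-action choice for brevity.

```python
import math
import random

# Hedged sketch of a Lagrangian policy gradient update for a finite-horizon
# CMDP. The policy keeps a SEPARATE softmax parameter vector for each stage
# h, so the learned policy can be non-stationary. A Lagrange multiplier lam
# relaxes the constraint E[total cost] <= BUDGET. All numbers illustrative.

H = 3                    # horizon length (assumed)
REWARD = [1.0, 0.0]      # action 0 pays more reward...
COST = [1.0, 0.0]        # ...but is the only action that incurs cost
BUDGET = 1.0             # expected total cost should not exceed this

def softmax(theta):
    z = [math.exp(t - max(theta)) for t in theta]
    s = sum(z)
    return [x / s for x in z]

def train(iters=4000, lr=0.05, lr_dual=0.02, seed=0):
    rng = random.Random(seed)
    theta = [[0.0, 0.0] for _ in range(H)]  # one parameter vector per stage
    lam = 0.0
    for _ in range(iters):
        total_cost, grads = 0.0, []
        for h in range(H):
            p = softmax(theta[h])
            a = 0 if rng.random() < p[0] else 1
            # REINFORCE: gradient of log pi_h(a) w.r.t. theta[h]
            glog = [(1.0 if i == a else 0.0) - p[i] for i in range(2)]
            # Lagrangian reward: r(a) - lam * c(a)
            adv = REWARD[a] - lam * COST[a]
            grads.append([lr * adv * g for g in glog])
            total_cost += COST[a]
        for h in range(H):
            theta[h] = [t + g for t, g in zip(theta[h], grads[h])]
        # Dual ascent: raise lam while the sampled cost exceeds the budget,
        # then project back onto lam >= 0.
        lam = max(0.0, lam + lr_dual * (total_cost - BUDGET))
    return theta, lam

theta, lam = train()
```

The primal step ascends the relaxed objective (reward minus priced cost) while the dual step adjusts the price `lam` toward a level at which the constraint holds; this primal-dual structure is what the Lagrangian relaxation in the paper refers to.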
📝 Abstract
The infinite-horizon setting is widely adopted for problems of reinforcement learning (RL), and in this setting the optimal policies are stationary. In many situations, however, finite-horizon control problems are of interest, and for such problems the optimal policies are time-varying in general. Another setting that has become popular in recent times is that of constrained reinforcement learning, where the agent maximizes its rewards while also aiming to satisfy given constraint criteria. However, this setting has so far been studied only in the context of infinite-horizon MDPs, where stationary policies are optimal. We present an algorithm for constrained RL in the finite-horizon setting, where the horizon terminates after a fixed (finite) time. We use function approximation in our algorithm, which is essential when the state and action spaces are large or continuous, and we use the policy gradient method to find the optimal policy. The optimal policy that we obtain depends on the stage and is therefore non-stationary in general. To the best of our knowledge, this paper presents the first policy gradient algorithm for the finite-horizon setting with constraints. We show the convergence of our algorithm to a constrained optimal policy. We also compare and analyze the performance of our algorithm through experiments and show that it performs better than some other well-known algorithms.
Problem

Research questions and friction points this paper is trying to address.

Develops a policy gradient algorithm for finite-horizon constrained MDPs.
Handles the non-stationary optimal policies that arise in finite-horizon settings.
Proves convergence to a constrained-optimal policy under function approximation.
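The non-stationarity mentioned above is intrinsic to finite horizons: even a tiny deterministic MDP, solved exactly by backward induction, can prescribe different actions in the same state at different stages. The toy chain below (states, rewards, and bonus are invented for illustration) makes this concrete.

```python
# Toy finite-horizon MDP solved by backward induction. A chain of states
# 0..2 pays a terminal bonus only if the agent sits on the goal state when
# the horizon ends; staying put pays a small per-step reward. Because the
# start state has one "slack" step, the optimal action in state 0 differs
# between stage 0 and stage 1: the optimal policy is non-stationary.

H = 3                 # horizon (assumed)
GOAL = 2              # rightmost state
BONUS = 10.0          # terminal reward for ending at the goal
STAY_REWARD = 1.0     # small per-step reward for staying put

V = [[0.0] * (GOAL + 1) for _ in range(H + 1)]
V[H][GOAL] = BONUS
policy = [[None] * (GOAL + 1) for _ in range(H)]

for h in range(H - 1, -1, -1):          # backward induction over stages
    for s in range(GOAL + 1):
        q_stay = STAY_REWARD + V[h + 1][s]
        q_move = 0.0 + V[h + 1][min(s + 1, GOAL)]
        if q_stay >= q_move:
            V[h][s], policy[h][s] = q_stay, "stay"
        else:
            V[h][s], policy[h][s] = q_move, "move"

# Same state (0), different stages -> different optimal actions:
print(policy[0][0], policy[1][0])  # prints "stay move"
```

A single stationary policy cannot reproduce this behavior, which is why the paper parameterizes the policy per stage rather than reusing one set of parameters across the horizon.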
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy gradient method tailored to the finite-horizon setting
Function approximation for large or continuous state-action spaces
Non-stationary (stage-dependent) optimal policies under constraints
Soumyajit Guin
Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560012, India
S. Bhatnagar
Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560012, India