Operator-Theoretic Foundations and Policy Gradient Methods for General MDPs with Unbounded Costs

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of theoretical foundations and efficient algorithms for policy optimization in Markov decision processes (MDPs) with unbounded costs and general state and action spaces. By formulating the MDP as an optimization problem over linear operators in a function space, the paper systematically introduces perturbation theory from functional analysis to derive gradients of the objective function, thereby establishing a policy gradient framework applicable to general MDPs. Building on this foundation, the authors propose a low-complexity proximal policy optimization (PPO)-style algorithm that overcomes the limitations of prior methods, which are typically confined to finite spaces or specific function approximators. This approach successfully extends classical reinforcement learning theory to general MDPs and enables efficient policy optimization in continuous or large-scale state-action spaces.
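To make the "PPO-style" ingredient concrete, here is a minimal sketch of a generic clipped-surrogate PPO update on a toy continuous-state, continuous-action problem. This is the standard PPO objective, not the paper's operator-theoretic algorithm; the Gaussian policy, the quadratic cost, and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prob(theta, s, a, sigma=1.0):
    """Log-density of a Gaussian policy a ~ N(theta * s, sigma^2)."""
    return -0.5 * ((a - theta * s) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def ppo_update(theta, s, a, adv, theta_old, lr=0.05, eps=0.2):
    """One ascent step on the clipped surrogate objective (scalar theta)."""
    def surrogate(t):
        ratio = np.exp(log_prob(t, s, a) - log_prob(theta_old, s, a))
        return np.mean(np.minimum(ratio * adv,
                                  np.clip(ratio, 1 - eps, 1 + eps) * adv))
    # Finite-difference gradient is adequate for a one-parameter policy.
    h = 1e-5
    grad = (surrogate(theta + h) - surrogate(theta - h)) / (2 * h)
    return theta + lr * grad

# Toy on-policy loop: states ~ N(0,1); cost is a^2, so advantage = -a^2
# and the optimal policy mean parameter is theta = 0.
theta = 0.5
for _ in range(200):
    s = rng.normal(size=64)
    a = theta * s + rng.normal(size=64)   # sample actions from the policy
    adv = -(a ** 2)                       # lower cost => higher advantage
    theta = ppo_update(theta, s, a, adv, theta_old=theta)
```

With this quadratic cost the surrogate gradient is approximately -2*theta in expectation, so theta shrinks geometrically toward the optimum at 0.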

📝 Abstract
Markov decision processes (MDPs) are viewed as an optimization of an objective function over certain linear operators on general function spaces. Using the well-established perturbation theory of linear operators, this viewpoint allows one to identify derivatives of the objective function as a function of the linear operators. This leads to a generalization of many well-known results in reinforcement learning to cases with general state and action spaces. Prior results of this type were only established in finite-state, finite-action MDP settings and in settings with certain linear function approximations. The framework also leads to new low-complexity PPO-type reinforcement learning algorithms for general state and action space MDPs.
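For context, the classical result being generalized is the finite-MDP policy gradient identity (stated here in its standard reward-maximization form; this formula is background, not taken from the paper, and with costs the sign of the objective is flipped):

$$\nabla_\theta J(\pi_\theta) \;=\; \mathbb{E}_{s \sim d^{\pi_\theta},\; a \sim \pi_\theta(\cdot \mid s)}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a)\right],$$

where $d^{\pi_\theta}$ is the discounted state-occupancy measure and $Q^{\pi_\theta}$ the action-value function. The abstract's claim is that such derivative formulas can be recovered by differentiating the objective with respect to the policy-induced linear operator, which makes sense beyond finite state and action spaces.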
Problem

Research questions and friction points this paper is trying to address.

Markov decision processes
unbounded costs
general state and action spaces
policy gradient methods
operator-theoretic foundations
Innovation

Methods, ideas, or system contributions that make the work stand out.

operator-theoretic
policy gradient
unbounded costs
general MDPs
PPO-type algorithms