From Optimization to Control: Quasi-Policy Iteration

📅 2023-11-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Value iteration (VI) has cheap iterations but converges slowly and is highly sensitive to the discount factor, while policy iteration (PI) converges in few iterations at a much higher per-iteration cost. Method: This paper proposes quasi-policy iteration (QPI), a novel algorithm that constructs a quasi-Newton approximation of the "Hessian" matrix in policy iteration by exploiting two linear structural constraints specific to Markov decision processes (MDPs), and that allows prior knowledge of the transition kernel to be incorporated to improve approximation fidelity. Contribution/Results: QPI retains the per-iteration computational complexity of VI, O(|S|²|A|), yet empirically achieves convergence behavior comparable to PI with very low sensitivity to the discount factor. By drawing a systematic analogy between quasi-Newton methods in convex optimization and MDP control, the work suggests a route to efficient and stable policy optimization.
📝 Abstract
Recent control algorithms for Markov decision processes (MDPs) have been designed using an implicit analogy with well-established optimization algorithms. In this paper, we review this analogy across four problem classes with a unified solution characterization allowing for a systematic transformation of algorithms from one domain to the other. In particular, we identify equivalent optimization and control algorithms that have already been pointed out in the existing literature, but mostly in a scattered way. With this unifying framework in mind, we adopt the quasi-Newton method from convex optimization to introduce a novel control algorithm coined as quasi-policy iteration (QPI). In particular, QPI is based on a novel approximation of the "Hessian" matrix in the policy iteration algorithm by exploiting two linear structural constraints specific to MDPs and by allowing for the incorporation of prior information on the transition probability kernel. While the proposed algorithm has the same computational complexity as value iteration, it interestingly exhibits an empirical convergence behavior similar to policy iteration with a very low sensitivity to the discount factor.
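The analogy the abstract refers to can be made concrete on a toy example: value iteration is a fixed-point iteration on the Bellman operator, while policy iteration is exactly Newton's method applied to the residual F(v) = v − T(v), whose Jacobian under the greedy policy is I − γP_π. A minimal sketch of both (synthetic random MDP, my own illustration, not code from the paper; NumPy assumed):

```python
import numpy as np

# Tiny synthetic MDP (hypothetical example data, not from the paper).
rng = np.random.default_rng(0)
nS, nA, gamma = 4, 2, 0.95
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = distribution over next states
R = rng.random((nS, nA))                       # rewards R[s, a]

def bellman(v):
    # Optimal Bellman operator: (T v)(s) = max_a [ R(s,a) + gamma * P(s,a) . v ]
    return (R + gamma * P @ v).max(axis=1)

def value_iteration(tol=1e-10):
    # VI = fixed-point iteration v <- T(v); contracts at rate gamma,
    # which is why it slows down as gamma -> 1.
    v, iters = np.zeros(nS), 0
    while True:
        v_new = bellman(v)
        iters += 1
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, iters
        v = v_new

def policy_iteration():
    # PI = Newton's method on F(v) = v - T(v): each step solves the linear
    # system (I - gamma * P_pi) v = r_pi exactly for the current greedy policy.
    pi = np.zeros(nS, dtype=int)
    while True:
        P_pi = P[np.arange(nS), pi]   # |S| x |S| transition matrix under pi
        r_pi = R[np.arange(nS), pi]
        v = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)  # policy evaluation
        pi_new = (R + gamma * P @ v).argmax(axis=1)           # greedy improvement
        if np.array_equal(pi_new, pi):
            return v, pi
        pi = pi_new
```

Both solvers reach the same optimal value function; the exact linear solve in policy iteration is the O(|S|³) step that QPI replaces with a cheap quasi-Newton surrogate.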
Problem

Research questions and friction points this paper is trying to address.

How to systematically transfer acceleration schemes (e.g., quasi-Newton methods) from convex optimization to MDP control
Value iteration's slow convergence and high sensitivity to the discount factor
Policy iteration's high per-iteration cost from exact policy evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quasi-Newton method applied to policy iteration
Approximates Hessian matrix with MDP constraints
Incorporates prior transition probability information
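One plausible reading of the "MDP structural constraints" is that any transition kernel is row-stochastic: each row is nonnegative and sums to one. A quasi-Newton surrogate of the kernel can be kept on that set by projecting its rows onto the probability simplex. The sketch below illustrates only this projection idea under that assumption; it is not the paper's actual QPI update:

```python
import numpy as np

def project_row_to_simplex(x):
    # Euclidean projection of a vector onto {p : p >= 0, sum(p) = 1},
    # via the standard sorting-based algorithm.
    u = np.sort(x)[::-1]                 # sort descending
    css = np.cumsum(u)
    # Largest index rho with u[rho] * (rho + 1) > css[rho] - 1 (always holds at 0).
    rho = np.nonzero(u * np.arange(1, len(x) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)   # shift that enforces sum-to-one
    return np.maximum(x - theta, 0.0)

def project_kernel(M):
    # Row-wise projection: nearest row-stochastic matrix in Frobenius norm,
    # keeping a kernel approximation structurally valid after each update.
    return np.vstack([project_row_to_simplex(row) for row in M])
```

Projecting an already-stochastic matrix leaves it unchanged, so a surrogate initialized from prior transition information stays consistent with that prior until updates move it.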
M. A. S. Kolarijani
Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
Peyman Mohajerin Esfahani