🤖 AI Summary
Value iteration (VI) is cheap per iteration but converges slowly and is highly sensitive to the discount factor, while standard policy iteration (PI) converges rapidly but requires solving a costly linear system at each step. Method: This paper proposes Quasi-Policy Iteration (QPI), which adapts the quasi-Newton method from convex optimization to MDP control. QPI builds a novel approximation of the "Hessian" matrix in policy iteration by exploiting two linear structural constraints specific to Markov decision processes (MDPs), and it can incorporate prior knowledge of the transition probability kernel to improve approximation fidelity. Contribution/Results: QPI retains the per-iteration computational complexity of VI—O(|S|²|A|)—yet empirically achieves convergence comparable to standard PI with very low sensitivity to the discount factor. By establishing a systematic analogy between convex optimization algorithms and MDP control, this work offers a unified framework for transferring methods between the two domains.
📝 Abstract
Recent control algorithms for Markov decision processes (MDPs) have been designed using an implicit analogy with well-established optimization algorithms. In this paper, we review this analogy across four problem classes with a unified solution characterization allowing for a systematic transformation of algorithms from one domain to the other. In particular, we identify equivalent optimization and control algorithms that have already been pointed out in the existing literature, but mostly in a scattered way. With this unifying framework in mind, we adopt the quasi-Newton method from convex optimization to introduce a novel control algorithm coined as quasi-policy iteration (QPI). In particular, QPI is based on a novel approximation of the "Hessian" matrix in the policy iteration algorithm by exploiting two linear structural constraints specific to MDPs and by allowing for the incorporation of prior information on the transition probability kernel. While the proposed algorithm has the same computational complexity as value iteration, it interestingly exhibits an empirical convergence behavior similar to policy iteration with a very low sensitivity to the discount factor.
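The optimization–control analogy behind QPI can be made concrete on a toy example: finding the optimal value function is a fixed-point problem F(V) = T(V) − V = 0, where T is the Bellman optimality operator. Applying Newton's method to F recovers policy iteration, while the plain fixed-point iteration is value iteration; QPI sits between these by approximating the Jacobian (the "Hessian" of the summary). The sketch below, on a small random MDP (all sizes and names are illustrative, not from the paper, and this shows the VI/Newton endpoints of the analogy rather than the paper's QPI update itself):

```python
import numpy as np

# Tiny random MDP: |S| states, |A| actions (illustrative only).
rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.95
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)   # row-stochastic transition kernels P(a)
r = rng.random((A, S))              # rewards r(a, s)

def bellman(V):
    # Bellman optimality operator T(V) and the greedy policy attaining it.
    Q = r + gamma * (P @ V)         # shape (A, S)
    return Q.max(axis=0), Q.argmax(axis=0)

# Value iteration: V <- T(V); per-iteration cost O(|S|^2 |A|),
# but a contraction with modulus gamma, hence slow for gamma near 1.
V_vi = np.zeros(S)
for _ in range(2000):
    V_vi, _ = bellman(V_vi)

# Newton's method on F(V) = T(V) - V: under the greedy policy pi the
# Jacobian is J = gamma * P_pi - I, and the Newton step reduces to exact
# policy evaluation, i.e. policy iteration (O(|S|^3) per step).
V_pi = np.zeros(S)
for _ in range(50):
    TV, pi = bellman(V_pi)
    P_pi = P[pi, np.arange(S), :]          # transition rows under pi
    J = gamma * P_pi - np.eye(S)
    V_pi = V_pi - np.linalg.solve(J, TV - V_pi)

assert np.allclose(V_vi, V_pi, atol=1e-6)  # both reach the same fixed point
```

A quasi-Newton scheme in this picture replaces `J` with a cheap structured approximation updated from iterates, which is how QPI keeps VI's per-iteration cost while approaching PI's convergence behavior.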