Continuous Policy and Value Iteration for Stochastic Control Problems and Its Convergence

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the joint solution of entropy-regularized relaxed control and classical stochastic optimal control over an infinite horizon. A key challenge lies in the difficulty of synchronously updating the value function and optimal policy while ensuring convergence guarantees. To resolve this, we propose a continuous-time policy-value iteration algorithm that, for the first time, incorporates Langevin-type stochastic differential equations into the policy iteration framework. Under a monotonicity condition on the Hamiltonian, we establish rigorous global convergence to the optimal solution. The method enables concurrent nonconvex optimization and state-distribution sampling, achieving end-to-end joint optimization of the value function and control policy. Theoretically, our contribution is a unified convergence analysis framework bridging continuous policy iteration with stochastic sampling dynamics—significantly expanding the rigorously justified applicability of machine learning methods in stochastic optimal control.

📝 Abstract
We introduce a continuous policy-value iteration algorithm in which the approximations of the value function of a stochastic control problem and of the optimal control are updated simultaneously through Langevin-type dynamics. The framework applies to both entropy-regularized relaxed control problems and classical control problems over an infinite horizon. We establish policy improvement and prove convergence to the optimal control under a monotonicity condition on the Hamiltonian. By using Langevin-type stochastic differential equations for continuous updates along the policy iteration direction, our approach makes distribution sampling and non-convex learning techniques from machine learning available for optimizing the value function and identifying the optimal control at the same time.
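To make the Langevin-type update concrete, the sketch below runs an Euler–Maruyama discretization of an overdamped Langevin SDE on a stacked vector of (value, policy) parameters. This is an illustrative toy, not the authors' algorithm: the quadratic potential `U`, the step size `eta`, and the inverse temperature `beta` are all assumptions standing in for the Hamiltonian-driven drift in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth objective standing in for the (negative) performance
# criterion; in the paper the descent direction comes from the Hamiltonian,
# not from an explicit potential like this one.
def grad_U(theta):
    return theta  # gradient of U(theta) = ||theta||^2 / 2

eta = 1e-2    # step size (assumed)
beta = 10.0   # inverse temperature (assumed)
theta = rng.normal(size=4)  # joint (value, policy) parameters, stacked

# Euler-Maruyama discretization of the overdamped Langevin SDE
#   d theta_t = -grad U(theta_t) dt + sqrt(2 / beta) dW_t,
# which moves the parameters continuously along the descent direction while
# the injected noise enables sampling and escape from local minima.
for _ in range(5000):
    noise = rng.normal(size=theta.shape)
    theta = theta - eta * grad_U(theta) + np.sqrt(2 * eta / beta) * noise

# For large times, theta is approximately distributed according to the
# Gibbs measure proportional to exp(-beta * U), concentrated near the
# minimizer theta = 0.
print(np.linalg.norm(theta))
```

The noise term is what distinguishes this from plain gradient descent: it turns the iteration into a sampler for the Gibbs measure, which is the mechanism the abstract refers to as combining distribution sampling with non-convex optimization.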
Problem

Research questions and friction points this paper is trying to address.

Develop continuous policy-value iteration for stochastic control
Apply algorithm to entropy-regularized and classical control problems
Ensure convergence to optimal control via Hamiltonian conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous policy-value iteration via Langevin dynamics
Applies to entropy-regularized and classical control problems
Combines distribution sampling and non-convex learning
Qi Feng
Department of Mathematics, Florida State University, Tallahassee, 32306
Gu Wang
Tsinghua University