🤖 AI Summary
This work addresses the joint solution of entropy-regularized relaxed control and classical stochastic optimal control over an infinite horizon. A key challenge is updating the value function and the optimal policy simultaneously while preserving convergence guarantees. To resolve this, we propose a continuous-time policy-value iteration algorithm that, for the first time, incorporates Langevin-type stochastic differential equations into the policy iteration framework. Under a monotonicity condition on the Hamiltonian, we establish rigorous global convergence to the optimal solution. The method enables concurrent nonconvex optimization and state-distribution sampling, achieving end-to-end joint optimization of the value function and control policy. Theoretically, our contribution is a unified convergence analysis bridging continuous policy iteration with stochastic sampling dynamics, significantly expanding the rigorously justified reach of machine learning methods in stochastic optimal control.
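One hedged way to make the coupled dynamics concrete, in our own illustrative notation rather than the paper's exact scheme, is a pair of Langevin-type equations on value parameters $\theta_s$ and policy parameters $\varphi_s$:

$$
\begin{aligned}
\mathrm{d}\theta_s &= -\nabla_{\theta}\,\mathcal{L}(\theta_s,\varphi_s)\,\mathrm{d}s + \sqrt{2\beta^{-1}}\,\mathrm{d}W_s,\\
\mathrm{d}\varphi_s &= -\nabla_{\varphi}\,\mathcal{H}(\theta_s,\varphi_s)\,\mathrm{d}s + \sqrt{2\beta^{-1}}\,\mathrm{d}B_s,
\end{aligned}
$$

where $\mathcal{L}$ stands for a Bellman-type value-consistency loss, $\mathcal{H}$ for a state-averaged Hamiltonian driving policy improvement, $\beta$ for an inverse temperature, and $W$, $B$ for independent Brownian motions. The gradient drift moves both iterates along the policy-iteration direction, while the noise term is what permits nonconvex optimization and distribution sampling.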
📝 Abstract
We introduce a continuous policy-value iteration algorithm in which approximations of the value function of a stochastic control problem and of the optimal control are updated simultaneously through Langevin-type dynamics. The framework applies to both entropy-regularized relaxed control problems and classical control problems over an infinite horizon. We establish policy improvement and prove convergence to the optimal control under a monotonicity condition on the Hamiltonian. By using Langevin-type stochastic differential equations to drive continuous updates along the policy iteration direction, our approach makes distribution-sampling and non-convex learning techniques from machine learning available for optimizing the value function and identifying the optimal control simultaneously.
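As a rough sketch of how such coupled updates could be simulated, the following applies an Euler–Maruyama discretization to two noisy gradient flows. Everything here is a hypothetical placeholder (the toy quadratic gradients `value_residual_grad` and `hamiltonian_grad`, the step size, and the temperature); it illustrates the Langevin-update pattern, not the paper's actual construction.

```python
import numpy as np

# Hypothetical stand-ins for the true objects: a Bellman-type residual
# gradient in the value parameters `theta`, and a Hamiltonian gradient in
# the policy parameters `phi`. Toy quadratics chosen only so the script runs.
def value_residual_grad(theta, phi):
    return theta - phi  # gradient of 0.5 * ||theta - phi||^2 in theta

def hamiltonian_grad(theta, phi):
    return phi - theta  # gradient of 0.5 * ||phi - theta||^2 in phi

def langevin_step(theta, phi, lr=1e-2, beta=50.0, rng=None):
    """One Euler-Maruyama step of coupled Langevin-type dynamics:
    gradient drift along the policy-iteration direction plus Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = np.sqrt(2.0 * lr / beta)  # discretized sqrt(2/beta) dW increment
    theta_new = (theta - lr * value_residual_grad(theta, phi)
                 + noise * rng.standard_normal(theta.shape))
    phi_new = (phi - lr * hamiltonian_grad(theta, phi)
               + noise * rng.standard_normal(phi.shape))
    return theta_new, phi_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta, phi = rng.standard_normal(4), rng.standard_normal(4)
    for _ in range(5000):
        theta, phi = langevin_step(theta, phi, rng=rng)
    print("theta:", theta)  # both vectors hover near a common noisy fixed point
    print("phi:  ", phi)
```

The design choice worth noting is that both parameter vectors are updated in the same (discretized) continuous time rather than in alternating policy-evaluation and policy-improvement phases, which is the simultaneity the abstract emphasizes.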