Analysis of Control Bellman Residual Minimization for Markov Decision Problem

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the instability commonly encountered in policy optimization under function approximation within Markov decision processes by systematically investigating control-oriented Bellman residual minimization. The approach constructs a policy optimization framework tailored for function approximation by directly minimizing the policy-related Bellman residual. For the first time, this work establishes a complete theoretical foundation for the method, rigorously proving its convergence and stability, thereby filling a long-standing gap in theoretical support for this line of research. The results demonstrate that, compared to conventional dynamic programming methods, the proposed approach exhibits more reliable convergence properties in control tasks involving function approximation.

📝 Abstract
Markov decision problems are most commonly solved via dynamic programming. Another approach is Bellman residual minimization, which directly minimizes the squared Bellman residual objective function. However, compared to dynamic programming, this approach has received relatively little attention, mainly because it is often less efficient in practice and can be more difficult to extend to model-free settings such as reinforcement learning. Nonetheless, Bellman residual minimization has several advantages that make it worth investigating, such as more stable convergence with function approximation for value functions. While Bellman residual methods for policy evaluation have been widely studied, methods for policy optimization (control tasks) have been scarcely explored. In this paper, we establish foundational results for control Bellman residual minimization in policy optimization.
Problem

Research questions and friction points this paper is trying to address.

Bellman residual minimization
Markov decision problem
policy optimization
control tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bellman residual minimization
policy optimization
Markov decision processes
control tasks
function approximation
Donghwan Lee
KAIST
Decision making, control, and optimization

Hyukjun Yang
Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, South Korea