Is Bellman Equation Enough for Learning Control?

📅 2025-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper reveals the non-uniqueness of solutions to the Bellman equation in continuous state spaces: in particular, linear systems admit at least $\binom{2n}{n}$ distinct solutions, causing value-based learning to converge to spurious "optimal" policies that satisfy the Bellman equation but yield unstable closed-loop dynamics. Method: the authors propose a positive-definite neural network architecture that structurally enforces value-function positivity via parameter constraints, thereby embedding Bellman-equation solving within a Lyapunov stability framework. Contribution/Results: the paper provides the first quantitative characterization of solution multiplicity and establishes a stability-driven, structured value-function modeling paradigm. Theoretically, the design guarantees existence of, and convergence to, the unique stable optimal solution. Empirically, it achieves 100% stable convergence on both linear and nonlinear systems, significantly enhancing closed-loop robustness. Crucially, it delivers verifiable convergence guarantees that are unattainable with conventional unconstrained value-function approximators.
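The positivity constraint described above can be illustrated with a minimal sketch. This is not the paper's exact architecture; it is one standard way (under assumed hyperparameters) to make a network positive definite by construction: square the norm of a feature map, subtract the features at the origin so that $V(0) = 0$, and add a small quadratic term so that $V(x) > 0$ for all $x \neq 0$.

```python
import numpy as np

rng = np.random.default_rng(0)

class PositiveDefiniteValueNet:
    """Sketch of a value network that is positive definite by construction:
        V(x) = ||phi(x) - phi(0)||^2 + eps * ||x||^2
    The squared norm makes V(x) >= 0, subtracting phi(0) pins V(0) = 0,
    and the eps * ||x||^2 term enforces strict positivity for x != 0.
    (Hypothetical layer sizes; not the paper's exact design.)"""

    def __init__(self, dim_in=2, dim_hidden=16, dim_out=8, eps=1e-3):
        self.W1 = rng.standard_normal((dim_hidden, dim_in))
        self.b1 = rng.standard_normal(dim_hidden)
        self.W2 = rng.standard_normal((dim_out, dim_hidden))
        self.eps = eps

    def _phi(self, x):
        # One hidden tanh layer as the feature map.
        return self.W2 @ np.tanh(self.W1 @ x + self.b1)

    def value(self, x):
        x = np.asarray(x, dtype=float)
        diff = self._phi(x) - self._phi(np.zeros_like(x))
        return float(diff @ diff + self.eps * (x @ x))

net = PositiveDefiniteValueNet()
```

Because positivity holds for every parameter setting, any function the network can represent is a Lyapunov-function candidate, which is what rules out the unstable Bellman solutions during training.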

📝 Abstract
The Bellman equation and its continuous-time counterpart, the Hamilton-Jacobi-Bellman (HJB) equation, serve as necessary conditions for optimality in reinforcement learning and optimal control. While the value function is known to be the unique solution to the Bellman equation in tabular settings, we demonstrate that this uniqueness fails to hold in continuous state spaces. Specifically, for linear dynamical systems, we prove the Bellman equation admits at least $\binom{2n}{n}$ solutions, where $n$ is the state dimension. Crucially, only one of these solutions yields both an optimal policy and a stable closed-loop system. We then demonstrate a common failure mode in value-based methods: convergence to unstable solutions due to the exponential imbalance between admissible and inadmissible solutions. Finally, we introduce a positive-definite neural architecture that guarantees convergence to the stable solution by construction to address this issue.
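The multiplicity claim can be checked concretely in the scalar case ($n = 1$, so $\binom{2n}{n} = 2$). For dynamics $\dot{x} = ax + bu$ with cost $\int (qx^2 + ru^2)\,dt$ and quadratic value ansatz $V(x) = Px^2$, the HJB equation reduces to the algebraic Riccati equation $2aP - (b^2/r)P^2 + q = 0$, a quadratic with two real roots. Both roots satisfy the optimality condition, but only one stabilizes the closed loop. The sketch below (with arbitrarily chosen $a, b, q, r$) makes this explicit:

```python
import numpy as np

# Scalar LQR example: dynamics x' = a*x + b*u, cost integral of
# q*x^2 + r*u^2. With V(x) = P*x^2, the HJB equation becomes the
# algebraic Riccati equation 2*a*P - (b^2/r)*P^2 + q = 0.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Quadratic in P: -(b^2/r)*P^2 + 2*a*P + q = 0. Its two real roots
# match the binom(2n, n) = 2 Bellman solutions for n = 1.
coeffs = [-(b**2) / r, 2 * a, q]
roots = np.roots(coeffs).real  # discriminant > 0, so both roots are real

for P in sorted(roots):
    K = b * P / r             # feedback gain from u = -K*x
    pole = a - b * K          # closed-loop pole a - b^2*P/r
    print(f"P = {P:+.4f}, closed-loop pole = {pole:+.4f}, "
          f"stable = {pole < 0}")
```

With these numbers the roots are $P = 1 \pm \sqrt{2}$: the positive root yields a stable pole at $-\sqrt{2}$, while the negative root "solves" the same equation yet destabilizes the system. An unconstrained value approximator can land on either root; enforcing $V(x) > 0$ excludes the spurious one.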
Problem

Research questions and friction points this paper is trying to address.

Uniqueness of Bellman-equation solutions fails in continuous state spaces.
Value-based methods may converge to unstable, non-optimal solutions.
Conventional unconstrained value-function approximators offer no stability guarantee.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantifies solution multiplicity: at least $\binom{2n}{n}$ Bellman solutions for linear systems
Explains why value-based methods converge to unstable solutions
Introduces a positive-definite architecture that guarantees the stable solution