Optimistic Training and Convergence of Q-Learning -- Extended Version

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses open questions about the convergence of Q-learning with function approximation outside the standard tabular and linear MDP settings, in particular the existence and uniqueness of solutions to the projected Bellman equation (PBE) and the stability of the algorithm. Using linear function approximation, analysis of the PBE, constructed counterexamples, and a study of sensitivity to the training policy, the work examines convergence behavior under both oblivious policies and (ε,κ)-tamed Gibbs policies. The key findings are that under an oblivious training policy the PBE may have no solution or multiple solutions, with the algorithm unstable in either case, and that even with an ideal basis (the true Q-function lies in the span of the basis) the PBE admits two solutions under the greedy policy, and hence also under the (ε,κ)-tamed Gibbs policy for all sufficiently small ε and κ ≥ 1; the tamed Gibbs policy ensures bounded parameter estimates but does not guarantee uniqueness. These results show that convergence of Q-learning requires stronger structural conditions than is commonly assumed, challenging assumptions underlying much of the existing theory.
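For orientation, one standard form of the PBE under linear function approximation is sketched below; the sign convention (reward/max versus cost/min) and the choice of eligibility vector are assumptions here and may differ from the paper's exact formulation:

$$
Q^\theta(x,a) = \theta^\top \psi(x,a), \qquad
\mathbb{E}\Big[\psi(X,A)\,\big(r(X,A) + \gamma \max_{a'} Q^{\theta^*}(X',a') - Q^{\theta^*}(X,A)\big)\Big] = 0,
$$

where the expectation is taken in steady state under the training policy, so the equation itself depends on how the agent is trained; existence and uniqueness of $\theta^*$ is precisely what the paper shows can fail.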

📝 Abstract
In recent work it is shown that Q-learning with linear function approximation is stable, in the sense of bounded parameter estimates, under the $(\varepsilon,\kappa)$-tamed Gibbs policy, where $\kappa$ is the inverse temperature and $\varepsilon>0$ is introduced for additional exploration. Under these assumptions it also follows that there is a solution to the projected Bellman equation (PBE). Left open are uniqueness of the solution and criteria for convergence outside of the standard tabular or linear MDP settings. The present work extends these results to other variants of Q-learning and clarifies prior work: a one-dimensional example shows that under an oblivious training policy there may be no solution to the PBE, or multiple solutions, and in each case the algorithm is not stable under oblivious training. The main contribution is that far more structure is required for convergence. An example is presented for which the basis is ideal, in the sense that the true Q-function is in the span of the basis; nevertheless, there are two solutions to the PBE under the greedy policy, and hence also under the $(\varepsilon,\kappa)$-tamed Gibbs policy for all sufficiently small $\varepsilon>0$ and $\kappa\ge 1$.
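As a concrete illustration of the recursion being analyzed, here is a minimal Python sketch of Q-learning with linear function approximation, $Q^\theta(x,a)=\theta^\top\psi(x,a)$, trained under a softmax (Gibbs) policy mixed with uniform exploration. This is an assumption-based sketch: the feature map psi, the step size, the discount factor, and the mixing construction are illustrative, and the paper's exact $(\varepsilon,\kappa)$-tamed Gibbs policy may be defined differently.

```python
import numpy as np

# Illustrative sketch only: not the paper's exact (epsilon, kappa)-tamed
# Gibbs construction.  Q_theta(x, a) = theta @ psi(x, a) with a
# user-supplied feature map psi.

def gibbs_policy(theta, psi, x, actions, kappa=1.0, eps=0.1, rng=None):
    """Sample an action: with probability eps uniform, otherwise softmax(kappa * Q_theta)."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.array([theta @ psi(x, a) for a in actions])
    p = np.exp(kappa * (q - q.max()))          # numerically stable softmax
    p /= p.sum()
    p = eps / len(actions) + (1.0 - eps) * p   # mix in uniform exploration
    return actions[rng.choice(len(actions), p=p)]

def q_learning_step(theta, psi, x, a, r, x_next, actions, gamma=0.95, alpha=0.01):
    """One stochastic-approximation step; its fixed points solve a projected Bellman equation."""
    q_next = max(theta @ psi(x_next, b) for b in actions)   # greedy bootstrap value
    td = r + gamma * q_next - theta @ psi(x, a)              # temporal-difference error
    return theta + alpha * td * psi(x, a)                    # update along the feature vector
```

The training policy enters only through the distribution of the visited pairs (x, a), which is exactly why the existence and number of PBE solutions can change with the policy used during training.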
Problem

Research questions and friction points this paper is trying to address.

Q-learning
Projected Bellman Equation
Convergence
Function Approximation
Policy Stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Q-learning
Projected Bellman Equation
Linear Function Approximation
Convergence Analysis
Policy Dependence