🤖 AI Summary
Existing theoretical analyses of reinforcement learning lack rigorous formal verification, particularly concerning the almost-sure convergence of classical algorithms under Markov sampling.
Method: We establish a unified, extensible formal framework in the Lean 4 theorem prover with the Mathlib library, grounded in the Robbins–Siegmund martingale convergence theorem from stochastic approximation theory.
Contribution/Results: This work provides the first formal, machine-checked proof of almost-sure convergence for both Q-learning and linear temporal-difference (TD) learning under Markov sampling, within a single logical foundation. The framework is designed to support extensions to convergence rates and alternative convergence modes (e.g., in probability or in mean square). All formalized proofs are publicly available, offering a reusable, trustworthy foundation for verifying reinforcement learning theory and enabling future mechanized analysis of learning algorithms.
📝 Abstract
In this paper, we formalize the almost sure convergence of $Q$-learning and linear temporal difference (TD) learning with Markovian samples using the Lean 4 theorem prover together with the Mathlib library. $Q$-learning and linear TD are among the earliest and most influential reinforcement learning (RL) algorithms. The investigation of their convergence properties was not only a major research topic during the early development of the RL field but also continues to receive attention today. This paper formally verifies their almost sure convergence in a unified framework based on the Robbins–Siegmund theorem. The framework developed in this work can easily be extended to convergence rates and other modes of convergence. This work thus takes an important step towards fully formalizing convergent RL results. The code is available at https://github.com/ShangtongZhang/rl-theory-in-lean.
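For reference, a standard statement of the Robbins–Siegmund theorem underlying the unified framework is sketched below. The notation here is the textbook formulation from stochastic approximation, not the paper's Lean formalization: let $(V_n)$, $(a_n)$, $(b_n)$, $(c_n)$ be nonnegative random sequences adapted to a filtration $(\mathcal{F}_n)$ with $\sum_n a_n < \infty$ and $\sum_n b_n < \infty$ almost surely.

```latex
% Robbins--Siegmund theorem (standard statement; illustrative notation,
% not drawn from the paper's formalization).
% If the almost-supermartingale inequality
\[
  \mathbb{E}\!\left[V_{n+1} \mid \mathcal{F}_n\right]
    \le (1 + a_n)\, V_n + b_n - c_n
  \quad \text{a.s. for all } n,
\]
% holds, then $V_n$ converges almost surely to a finite random variable
% and the perturbation terms are summable:
\[
  V_n \xrightarrow{\ \text{a.s.}\ } V_\infty < \infty,
  \qquad
  \sum_{n} c_n < \infty \ \text{a.s.}
\]
```

In convergence proofs for Q-learning and linear TD, $V_n$ typically plays the role of a (squared) distance between the current iterate and the fixed point, and the summability of $c_n$ forces that distance to vanish along the iterates.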