Reinforcement Learning with Function Approximation for Non-Markov Processes

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the convergence of reinforcement learning under non-Markovian state and cost processes by establishing a theoretical foundation within the function approximation framework. By constructing an auxiliary Markov decision process (MDP) and combining orthogonal projection with Bellman operator analysis, the paper proves that policy evaluation converges, under ergodicity conditions, to the projected Bellman fixed point of this auxiliary MDP. It further introduces a basis-function selection method based on quantization maps and, for the first time, establishes the convergence of Q-learning with linear function approximation in non-Markovian environments. Finally, it derives explicit error bounds for finite-memory state representations in partially observable MDPs (POMDPs).
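As a rough illustration of the policy-evaluation result, the sketch below runs TD(0) with linear function approximation on a small Markov chain and compares the (tail-averaged) iterate with the projected Bellman fixed point computed in closed form. The chain, features, costs, and step sizes are illustrative assumptions, not the paper's non-Markovian construction.

```python
import numpy as np

# TD(0) policy evaluation with linear function approximation.
# The iterate should approach the projected Bellman fixed point
# theta* solving  Phi^T D (I - gamma P) Phi theta* = Phi^T D c,
# where D weights states by the stationary distribution.
rng = np.random.default_rng(0)
n_states, n_features, gamma = 6, 3, 0.9

Phi = rng.normal(size=(n_states, n_features))        # basis functions (one row per state)
P = rng.dirichlet(np.ones(n_states), size=n_states)  # fixed-policy transition kernel
c = rng.normal(size=n_states)                        # per-state costs

# Closed-form projected fixed point for comparison.
evals, evecs = np.linalg.eig(P.T)
d = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
d /= d.sum()                                         # stationary distribution of P
D = np.diag(d)
theta_star = np.linalg.solve(
    Phi.T @ D @ (np.eye(n_states) - gamma * P) @ Phi,
    Phi.T @ D @ c,
)

# Stochastic TD(0) with Robbins-Monro step sizes and tail averaging.
T = 200_000
theta = np.zeros(n_features)
theta_sum = np.zeros(n_features)
s = 0
for t in range(T):
    s_next = rng.choice(n_states, p=P[s])
    delta = c[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += (10.0 / (t + 100.0)) * delta * Phi[s]
    if t >= T // 2:
        theta_sum += theta
    s = s_next
theta_bar = theta_sum / (T - T // 2)
```

The paper's contribution is showing that the same kind of limit holds when the sampled process is not Markov, with the fixed point belonging to an auxiliary MDP rather than the observed process.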

📝 Abstract
We study reinforcement learning methods with linear function approximation under non-Markov state and cost processes. We first consider the policy evaluation method and show that the algorithm converges under suitable ergodicity conditions on the underlying non-Markov processes. Furthermore, we show that the limit corresponds to the fixed point of a joint operator composed of an orthogonal projection and the Bellman operator of an auxiliary *Markov* decision process. For Q-learning with linear function approximation, as in the Markov setting, convergence is not guaranteed in general. We show, however, that for the special case where the basis functions are chosen based on quantization maps, the convergence can be shown under similar ergodicity conditions. Finally, we apply our results to partially observed Markov decision processes, where finite-memory variables are used as state representations, and we derive explicit error bounds for the limits of the resulting learning algorithms.
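When the basis functions are indicators of quantization cells, linear-function-approximation Q-learning collapses to a table indexed by (cell, action), which is the special case where the abstract's convergence result applies. The toy dynamics, cost, and quantizer below are assumptions for illustration, not the paper's construction.

```python
import numpy as np

# Q-learning with quantization-based basis functions: each of the K
# cells of [0, 1) carries one indicator basis function, so the linear
# parameter vector is exactly a (cell, action) table.
rng = np.random.default_rng(1)
K, n_actions, gamma = 8, 2, 0.9

def quantize(x):
    # Quantization map on [0, 1) into K cells.
    return min(int(x * K), K - 1)

def step(x, a):
    # Illustrative noisy dynamics on the circle, quadratic cost.
    x_next = (x + 0.1 * (a - 0.5) + 0.05 * rng.normal()) % 1.0
    return x_next, (x - 0.5) ** 2

Q = np.zeros((K, n_actions))
visits = np.zeros((K, n_actions))
x = rng.random()
for _ in range(100_000):
    a = int(rng.integers(n_actions))       # uniform exploring policy
    x_next, cost = step(x, a)
    i, j = quantize(x), quantize(x_next)
    visits[i, a] += 1
    alpha = 1.0 / visits[i, a]             # per-cell diminishing step size
    # Costs are minimized, so the Bellman backup uses a min over actions.
    Q[i, a] += alpha * (cost + gamma * Q[j].min() - Q[i, a])
    x = x_next
```

Because the quantized process need not be Markov even when the underlying state is, this is precisely the regime the paper's ergodicity-based convergence analysis is designed to handle.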
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Function Approximation
Non-Markov Processes
Convergence
Policy Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

non-Markov processes
linear function approximation
ergodicity
quantization maps
partially observed MDPs
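The finite-memory state representation used in the POMDP application can be sketched as a fixed-length window of recent observations and actions; the window length and encoding below are hypothetical choices for illustration, and the paper's contribution is bounding the error that such truncation induces.

```python
from collections import deque

# A finite-memory state for a POMDP: the last n (observation, action)
# pairs serve as an approximate Markov state.
class FiniteMemoryState:
    def __init__(self, n_memory):
        self.n = n_memory
        self.window = deque(maxlen=n_memory)  # (observation, action) pairs

    def update(self, observation, action):
        self.window.append((observation, action))

    def state(self):
        # Pad with (None, None) until the window fills, so the
        # representation always has fixed length n and is hashable.
        pad = [(None, None)] * (self.n - len(self.window))
        return tuple(pad) + tuple(self.window)

mem = FiniteMemoryState(3)
mem.update(1, 0)
mem.update(0, 1)
s = mem.state()  # fixed-length tuple usable as a table/dictionary key
```

Running a learning algorithm on this compressed state is generally not Markov in the window variable, which is why the error bounds require the paper's non-Markovian convergence machinery.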