Stochastic Halpern iteration in normed spaces and applications to reinforcement learning

📅 2024-03-19
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
🤖 AI Summary
This paper analyzes the minibatch stochastic Halpern iteration for approximating fixed points of nonexpansive and γ-contractive operators in finite-dimensional normed spaces, with applications to model-free reinforcement learning (RL). For nonexpansive operators, the method achieves an oracle complexity of Õ(ε⁻⁵), improving on known rates for the stochastic Krasnosel'skii–Mann iteration; for γ-contractive operators, it attains O(ε⁻²(1−γ)⁻³). The paper also establishes an Ω(ε⁻³) lower bound that applies to a broad class of methods, including all averaged iterations with minibatching. As an application, it derives new model-free algorithms for average-reward and discounted-reward Markov decision processes (MDPs); in the average-reward case, the method handles weakly communicating MDPs without requiring prior knowledge of problem parameters.
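As a rough illustration of the scheme summarized above (not the paper's implementation), the following Python sketch runs a minibatch stochastic Halpern iteration on a hypothetical linear γ-contraction queried through a noisy oracle. The anchoring weights β_k = 1/(k+2) are the classical Halpern choice; the operator, noise level, batch size, and iteration count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem (not from the paper): T(x) = gamma * Q @ x + b
# is a gamma-contraction in the Euclidean norm since ||Q||_2 = 1 for
# orthogonal Q; its fixed point solves x = T(x).
gamma, d = 0.9, 5
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]  # orthogonal matrix
b = rng.standard_normal(d)

def noisy_T(x, batch):
    """Minibatch stochastic oracle: average of `batch` noisy evaluations of T."""
    noise = rng.standard_normal((batch, x.size)).mean(axis=0)
    return gamma * Q @ x + b + 0.1 * noise

def stochastic_halpern(x0, iters=2000, batch=64):
    """Halpern iteration: anchor each step back toward the start point x0."""
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)                  # classical Halpern step size
        x = beta * x0 + (1 - beta) * noisy_T(x, batch)
    return x

x_star = np.linalg.solve(np.eye(d) - gamma * Q, b)  # exact fixed point
x_hat = stochastic_halpern(np.zeros(d))
print(np.linalg.norm(x_hat - x_star))               # small residual
```

The anchoring term β_k·x0 is what distinguishes Halpern from Krasnosel'skii–Mann averaging, and the minibatch average is what keeps the oracle noise controlled.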

📝 Abstract
We analyze the oracle complexity of the stochastic Halpern iteration with minibatch, where we aim to approximate fixed-points of nonexpansive and contractive operators in a normed finite-dimensional space. We show that if the underlying stochastic oracle has uniformly bounded variance, our method exhibits an overall oracle complexity of $\tilde{O}(\varepsilon^{-5})$, to obtain $\varepsilon$ expected fixed-point residual for nonexpansive operators, improving recent rates established for the stochastic Krasnoselskii-Mann iteration. Also, we establish a lower bound of $\Omega(\varepsilon^{-3})$ which applies to a wide range of algorithms, including all averaged iterations even with minibatching. Using a suitable modification of our approach, we derive a $O(\varepsilon^{-2}(1-\gamma)^{-3})$ complexity bound in the case in which the operator is a $\gamma$-contraction to obtain an approximation of the fixed-point. As an application, we propose new model-free algorithms for average and discounted reward MDPs. For the average reward case, our method applies to weakly communicating MDPs without requiring prior parameter knowledge.
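To make the discounted-MDP application concrete, here is a hedged sketch of a Halpern-anchored synchronous Q-iteration: the Bellman optimality operator is a γ-contraction in the sup-norm, and each evaluation is replaced by a minibatch of sampled next states (an unbiased stochastic oracle). The tiny tabular MDP, batch size, and iteration counts below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny hypothetical tabular MDP (sizes and parameters are illustrative).
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
R = rng.uniform(0, 1, size=(S, A))           # deterministic rewards

def sampled_bellman(Q, batch):
    """Stochastic Bellman optimality operator via sampled next states."""
    T = np.zeros_like(Q)
    for s in range(S):
        for a in range(A):
            ns = rng.choice(S, size=batch, p=P[s, a])   # minibatch of transitions
            T[s, a] = R[s, a] + gamma * Q[ns].max(axis=1).mean()
    return T

def halpern_q_iteration(iters=3000, batch=32):
    """Anchor each sampled Bellman update back toward the initial Q-table."""
    Q0 = np.zeros((S, A))
    Q = Q0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)
        Q = beta * Q0 + (1 - beta) * sampled_bellman(Q, batch)
    return Q

# Reference Q* from deterministic value iteration, for comparison only.
Qstar = np.zeros((S, A))
for _ in range(5000):
    Qstar = R + gamma * np.einsum('sat,t->sa', P, Qstar.max(axis=1))

Q_hat = halpern_q_iteration()
print(np.abs(Q_hat - Qstar).max())           # sup-norm error of the estimate
```

Since the operator is only accessed through samples, this is model-free in spirit; the paper's actual algorithms and step-size schedules may differ.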
Problem

Research questions and friction points this paper is trying to address.

Analyze stochastic Halpern iteration for fixed-point approximation
Improve oracle complexity bounds for nonexpansive operators
Develop model-free algorithms for MDP reward optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic Halpern iteration with minibatch
Improved oracle complexity bounds for fixed-point computation
Model-free algorithms for MDPs