Adaptive Resolving Methods for Reinforcement Learning with Function Approximations

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses policy optimization in reinforcement learning under large or infinite state-action spaces. We propose an adaptive online algorithm based on linear programming (LP) reoptimization, integrating linear feature mapping for function approximation and incrementally resolving LPs to dynamically update policies, thereby substantially reducing sample complexity. Theoretically, we establish the first instance-dependent suboptimality bound of $\tilde{O}(1/N)$, improving upon the classical worst-case bound of $O(1/\sqrt{N})$. Empirically, our method demonstrates faster convergence and higher sample efficiency than existing baselines. The core innovation lies in incorporating instance-specific structural information into both the LP reformulation and the online update mechanism, enabling tighter and more adaptive policy learning.

📝 Abstract
Reinforcement learning (RL) problems are fundamental in online decision-making and have been instrumental in finding optimal policies for Markov decision processes (MDPs). Function approximation is usually deployed to handle large or infinite state-action spaces. In our work, we consider RL problems with function approximation and develop a new algorithm to solve them efficiently. Our algorithm is based on a linear programming (LP) reformulation and resolves the LP at each iteration as new data arrive. This resolving scheme enables our algorithm to achieve an instance-dependent sample complexity guarantee: given $N$ data points, the output of our algorithm enjoys an instance-dependent $\tilde{O}(1/N)$ suboptimality gap. Compared to the $O(1/\sqrt{N})$ worst-case guarantee established in the previous literature, our instance-dependent guarantee is tighter when the underlying instance is favorable, and numerical experiments also confirm the strong empirical performance of our algorithm.
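The LP reformulation the abstract refers to can be illustrated on a toy tabular MDP via the standard occupancy-measure LP (a minimal sketch with hypothetical numbers; the paper's actual algorithm additionally uses linear feature mapping and online resolving, which this does not show):

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-state, 2-action discounted MDP (hypothetical numbers for illustration).
S, A, gamma = 2, 2, 0.9
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])            # r[s, a]: per-step reward
P = np.zeros((S, A, S))               # P[s, a, s']: transition kernel
P[0, 0] = [0.9, 0.1]
P[0, 1] = [0.1, 0.9]
P[1, 0] = [0.8, 0.2]
P[1, 1] = [0.2, 0.8]
rho = np.array([0.5, 0.5])            # initial-state distribution

# Occupancy-measure LP:  max_mu  sum_{s,a} mu(s,a) r(s,a)
#   s.t.  sum_a mu(s',a) = (1-gamma) rho(s') + gamma sum_{s,a} P(s'|s,a) mu(s,a)
#         mu >= 0
c = -r.flatten()                      # linprog minimizes, so negate rewards
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(s == sp) - gamma * P[s, a, sp]
b_eq = (1 - gamma) * rho

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (S * A))
mu = np.maximum(res.x, 0.0).reshape(S, A)     # clip tiny numerical negatives
policy = mu / mu.sum(axis=1, keepdims=True)   # recover pi(a|s) from mu
print(policy.round(3))
```

Summing the flow constraints over $s'$ shows the occupancy measure is a probability distribution ($\sum_{s,a}\mu(s,a)=1$), and the optimal policy is read off by normalizing each row of $\mu$; here action 1 is optimal in both states since it steers toward the high-reward state.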
Problem

Research questions and friction points this paper is trying to address.

Develops an adaptive RL algorithm for function approximation
Improves sample complexity with instance-dependent guarantees
Resolves the linear program iteratively for efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

LP reformulation for RL with function approximation
Iterative LP resolving as new data arrive
Instance-dependent sample complexity guarantee
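The resolving idea behind these contributions can be sketched schematically: re-solve the occupancy-measure LP on the latest empirical model estimates after each batch of new data (a simplified tabular sketch with hypothetical numbers and basic epsilon-greedy exploration; not the paper's actual algorithm, which uses feature mapping and carries the instance-dependent analysis):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, A, gamma = 2, 2, 0.9
true_r = np.array([[1.0, 0.0], [0.0, 2.0]])       # unknown to the learner
true_P = np.array([[[0.9, 0.1], [0.1, 0.9]],
                   [[0.8, 0.2], [0.2, 0.8]]])
rho = np.array([0.5, 0.5])

def solve_lp(r, P):
    """Solve the occupancy-measure LP for a given (estimated) model."""
    c = -r.flatten()                              # linprog minimizes
    A_eq = np.zeros((S, S * A))
    for sp in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[sp, s * A + a] = float(s == sp) - gamma * P[s, a, sp]
    res = linprog(c, A_eq=A_eq, b_eq=(1 - gamma) * rho,
                  bounds=[(0, None)] * (S * A))
    mu = np.maximum(res.x, 0.0).reshape(S, A)     # clip numerical negatives
    return mu / mu.sum(axis=1, keepdims=True)     # pi(a|s)

counts = np.ones((S, A, S))                       # smoothed transition counts
reward_sum = np.zeros((S, A))
visits = np.ones((S, A))
for batch in range(20):                           # resolve per batch of data
    P_hat = counts / counts.sum(axis=2, keepdims=True)
    r_hat = reward_sum / visits
    policy = solve_lp(r_hat, P_hat)               # re-optimize on new estimates
    for _ in range(50):                           # collect a batch of samples
        s = rng.choice(S, p=rho)
        if rng.random() < 0.2:                    # epsilon-greedy exploration
            a = int(rng.integers(A))
        else:
            a = int(rng.choice(A, p=policy[s]))
        sp = int(rng.choice(S, p=true_P[s, a]))
        counts[s, a, sp] += 1
        visits[s, a] += 1
        reward_sum[s, a] += true_r[s, a]
```

Each resolve is warm-started conceptually by the improved estimates $(\hat{r}, \hat{P})$, so the policy sequence adapts as data accumulate; this is the tabular analogue of the paper's "resolve the LP at each iteration with new data arrival" scheme.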