Sample Efficient Active Algorithms for Offline Reinforcement Learning

📅 2026-02-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Offline reinforcement learning is fundamentally limited by insufficient state-action coverage and distributional shift. This work proposes an active reinforcement learning (ActiveRL) approach that leverages a small amount of online interaction to selectively query high-uncertainty regions, thereby refining the value function. For the first time, we provide a rigorous sample complexity analysis for ActiveRL, modeling uncertainty via Gaussian processes and combining concentration inequalities with information-theoretic bounds on information gain. We prove that only 𝒪(1/ε²) active queries are sufficient to learn an ε-optimal policy, which significantly improves upon the Ω(1/(ε²(1−γ)⁴)) lower bound inherent to purely offline methods. Empirical evaluations further demonstrate the efficiency and effectiveness of the proposed approach.


📝 Abstract
Offline reinforcement learning (RL) enables policy learning from static data but often suffers from poor coverage of the state-action space and from distributional shift. These problems can be mitigated by allowing a limited number of online interactions that selectively refine uncertain regions of the learned value function, a setting referred to as Active Reinforcement Learning (ActiveRL). While ActiveRL has seen empirical success, no theoretical analysis is available in the literature. We fill this gap by developing a rigorous sample-complexity analysis of ActiveRL through the lens of Gaussian Process (GP) uncertainty modeling. We propose an algorithm and, using GP concentration inequalities and information-gain bounds, derive high-probability guarantees showing that an $\epsilon$-optimal policy can be learned with $\mathcal{O}(1/\epsilon^2)$ active transitions, improving upon the $\Omega(1/(\epsilon^2(1-\gamma)^4))$ rate of purely offline methods. Our results reveal that ActiveRL achieves near-optimal information efficiency: guided uncertainty reduction accelerates value-function convergence with minimal online data. The analysis bridges Bayesian nonparametric regression and reinforcement-learning theory. We conduct several experiments to validate the algorithm and the theoretical findings.
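The core mechanism the abstract describes — using GP posterior uncertainty to decide which state-action regions to query online — can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a generic numpy-only example, with an assumed RBF kernel, 1-D features, and hypothetical function names, showing how posterior variance from offline data would pick the next active query.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 l^2)).
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2 * length_scale ** 2))

def gp_posterior_variance(X_train, X_cand, noise=1e-2, length_scale=0.5):
    # Zero-mean GP posterior variance at candidate points:
    # sigma^2(x) = k(x, x) - k(x, X) (K + sigma_n^2 I)^{-1} k(X, x).
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_cross = rbf_kernel(X_cand, X_train, length_scale)
    solve = np.linalg.solve(K, K_cross.T)
    var = 1.0 - np.sum(K_cross * solve.T, axis=1)
    return np.maximum(var, 0.0)  # clip tiny negative values from round-off

def select_active_query(X_offline, X_cand):
    # Query the candidate with maximum posterior variance,
    # i.e. the most uncertain region of the value estimate.
    var = gp_posterior_variance(X_offline, X_cand)
    return X_cand[np.argmax(var)], var

# Offline data clustered on the left of the feature range; candidates span [0, 1].
X_offline = np.array([0.05, 0.1, 0.15, 0.2])
X_cand = np.linspace(0.0, 1.0, 101)
query, var = select_active_query(X_offline, X_cand)
print(query)  # a point far from the offline data, where uncertainty is highest
```

In the paper's setting the queried transitions would then be added to the dataset and the value function re-estimated; the 𝒪(1/ε²) guarantee concerns how many such maximum-uncertainty queries are needed.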
Problem

Research questions and friction points this paper is trying to address.

offline reinforcement learning
distributional shift
state-action coverage
sample efficiency
active reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active Reinforcement Learning
Sample Complexity
Gaussian Process
Offline RL
Information Gain