List Replicable Reinforcement Learning

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning (RL) suffers from poor reproducibility due to unstable policy outputs that are highly sensitive to initialization and stochasticity. Method: We propose *list replicability*, a new paradigm requiring an algorithm to output, with high probability, a small list of policies containing at least one near-optimal policy. To achieve this, we formalize weak and strong replicability criteria and design a novel planning strategy combining lexicographic decision-making with randomized tolerance thresholds. Our approach integrates tabular RL algorithm design, state reachability analysis, and randomized policy selection. Contribution/Results: We establish the first polynomial *list complexity* guarantee within the PAC-RL framework—i.e., the list size grows only polynomially in the number of states, actions, and horizon length. Theoretical analysis proves correctness and efficiency, while empirical evaluation demonstrates significantly improved training stability and cross-run consistency.

📝 Abstract
Replicability is a fundamental challenge in reinforcement learning (RL), as RL algorithms are empirically observed to be unstable and sensitive to variations in training conditions. To formally address this issue, we study *list replicability* in the Probably Approximately Correct (PAC) RL framework, where an algorithm must return a near-optimal policy that lies in a *small list* of policies across different runs, with high probability. The size of this list defines the *list complexity*. We introduce both weak and strong forms of list replicability: the weak form ensures that the final learned policy belongs to a small list, while the strong form further requires that the entire sequence of executed policies remains constrained. These objectives are challenging, as existing RL algorithms exhibit exponential list complexity due to their instability. Our main theoretical contribution is a provably efficient tabular RL algorithm that guarantees list replicability by ensuring the list complexity remains polynomial in the number of states, actions, and the horizon length. We further extend our techniques to achieve strong list replicability, bounding the number of possible policy execution traces polynomially with high probability. Our theoretical result is made possible by key innovations including (i) a novel planning strategy that selects actions based on lexicographic order among near-optimal choices within a randomly chosen tolerance threshold, and (ii) a mechanism for testing state reachability in stochastic environments while preserving replicability. Finally, we demonstrate that our theoretical investigation sheds light on resolving the *instability* issue of RL algorithms used in practice. In particular, we show that empirically, our new planning strategy can be incorporated into practical RL frameworks to enhance their stability.
Problem

Research questions and friction points this paper is trying to address.

Addresses instability in reinforcement learning algorithms
Ensures near-optimal policies belong to a small list across runs
Provides theoretical guarantees for list replicability in PAC RL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Planning with lexicographic order among near-optimal choices
Testing state reachability in stochastic environments
Bounding policy list size polynomially for replicability
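The planning idea behind these bullets can be sketched in a few lines: draw a random tolerance threshold, collect the actions whose value is within that tolerance of the best, and break ties by lexicographic order. This is a minimal illustration of the general technique described in the abstract, not the paper's actual algorithm; the function name, the `tolerance_range` parameter, and the dictionary-of-Q-values interface are all hypothetical.

```python
import random


def lexicographic_action_selection(q_values, tolerance_range=(0.01, 0.1), rng=None):
    """Pick an action by lexicographic order among near-optimal choices.

    Sketch only: a random tolerance `tau` is drawn, actions whose Q-value is
    within `tau` of the maximum are treated as near-optimal, and the
    lexicographically smallest such action is returned deterministically.
    """
    rng = rng or random.Random()
    tau = rng.uniform(*tolerance_range)  # randomized tolerance threshold
    best = max(q_values.values())
    # Sorting the action keys makes the tie-break lexicographic.
    near_optimal = [a for a, q in sorted(q_values.items()) if q >= best - tau]
    return near_optimal[0]
```

The point of the randomized threshold is that a fixed tolerance can sit exactly on a Q-value boundary, where tiny estimation noise flips the selected action between runs; randomizing the threshold makes such boundary collisions unlikely, while the lexicographic tie-break keeps the choice deterministic once the near-optimal set is fixed.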
Bohan Zhang
Peking University
Michael Chen
Undergraduate, Carnegie Mellon University
A. Pavan
Iowa State University
N. V. Vinodchandran
University of Nebraska–Lincoln
Lin F. Yang
University of California, Los Angeles
Ruosong Wang
Assistant Professor, Peking University
reinforcement learning