Efficient Preference-Based Reinforcement Learning: Randomized Exploration Meets Experimental Design

📅 2025-06-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses reinforcement learning from sparse, pairwise preferences over trajectories, aiming to efficiently infer an underlying reward function. The authors propose the first meta-algorithm that simultaneously provides theoretical guarantees and computational tractability: it generates trajectory pairs via randomized exploration, employs batched querying with A-optimal experimental design to enable parallel preference elicitation and substantially reduce query complexity, and interfaces with standard RL oracles. In general MDPs, the algorithm achieves an $O(\sqrt{T})$ regret bound while ensuring convergence of the final policy. Empirical results show that the method matches the policy performance of explicit reward-driven RL while using far fewer preference queries than conventional approaches, validating both its theoretical rigor and practical efficiency.
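The A-optimal design step described above can be illustrated with a minimal sketch. The paper's own implementation is not shown here; this assumes a linear reward model where each candidate query is represented by the feature difference of its two trajectories, and greedily picks the batch that minimizes the A-optimality criterion (the trace of the inverse information matrix). All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def a_optimal_queries(diffs, k, reg=1e-3):
    """Greedily select k candidate trajectory pairs (rows of `diffs`,
    each a feature-difference vector phi(tau_1) - phi(tau_2)) that
    minimize trace((X^T X + reg*I)^{-1}), the A-optimality criterion."""
    d = diffs.shape[1]
    info = reg * np.eye(d)            # regularized information matrix
    chosen, remaining = [], set(range(len(diffs)))
    for _ in range(k):
        best, best_trace = None, np.inf
        for i in remaining:
            x = diffs[i]
            # Trace of the inverse information matrix if pair i is added.
            tr = np.trace(np.linalg.inv(info + np.outer(x, x)))
            if tr < best_trace:
                best, best_trace = i, tr
        chosen.append(best)
        remaining.remove(best)
        info += np.outer(diffs[best], diffs[best])
    return chosen
```

Greedy selection is a common tractable surrogate for exact optimal design; the paper's batched algorithm may use a different relaxation.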

📝 Abstract
We study reinforcement learning from human feedback in general Markov decision processes, where agents learn from trajectory-level preference comparisons. A central challenge in this setting is to design algorithms that select informative preference queries to identify the underlying reward while ensuring theoretical guarantees. We propose a meta-algorithm based on randomized exploration, which avoids the computational challenges associated with optimistic approaches and remains tractable. We establish both regret and last-iterate guarantees under mild reinforcement learning oracle assumptions. To improve query complexity, we introduce and analyze an improved algorithm that collects batches of trajectory pairs and applies optimal experimental design to select informative comparison queries. The batch structure also enables parallelization of preference queries, which is relevant in practical deployment as feedback can be gathered concurrently. Empirical evaluation confirms that the proposed method is competitive with reward-based reinforcement learning while requiring a small number of preference queries.
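The abstract's core primitive, learning a reward from trajectory-level preference comparisons, is typically modeled with a Bradley-Terry likelihood. As a hedged sketch (the paper does not specify this exact estimator; the linear parameterization, learning rate, and function name are assumptions for illustration), reward inference reduces to logistic regression on feature differences:

```python
import numpy as np

def fit_bradley_terry(diffs, labels, lr=0.1, steps=500):
    """Estimate a linear reward parameter theta from pairwise preferences
    under a Bradley-Terry model:
        P(tau_1 preferred over tau_2) = sigmoid(theta . (phi(tau_1) - phi(tau_2))).
    `diffs` holds feature differences, `labels` the observed preferences (0/1)."""
    theta = np.zeros(diffs.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-diffs @ theta))      # predicted preference prob.
        grad = diffs.T @ (labels - p) / len(labels)   # log-likelihood gradient
        theta += lr * grad                            # gradient ascent step
    return theta
```

The fitted reward can then be handed to any standard RL oracle, matching the meta-algorithm structure described in the summary.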
Problem

Research questions and friction points this paper is trying to address.

Learning from human feedback in Markov decision processes
Designing algorithms for informative preference queries
Improving query complexity with batch trajectory pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomized exploration avoids optimistic computation challenges
Batch trajectory pairs with optimal experimental design
Parallel preference queries for practical deployment
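The batch structure noted above means each preference query in a batch is independent, so feedback can be gathered concurrently. A minimal sketch of that deployment pattern (the `oracle` callable standing in for a human annotator is a hypothetical placeholder, not an interface from the paper):

```python
from concurrent.futures import ThreadPoolExecutor

def elicit_batch(pairs, oracle, max_workers=8):
    """Query a preference oracle on a batch of trajectory pairs in parallel.
    Each query is independent, so results can be collected concurrently,
    e.g. from multiple annotators; order of results matches `pairs`."""
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(lambda pair: oracle(*pair), pairs))
```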