LEASE: Offline Preference-based Reinforcement Learning with High Sample Efficiency

📅 2024-12-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high annotation cost and scarcity of human preference labels in offline preference-based reinforcement learning (PbRL), this paper proposes the first theoretically grounded, sample-efficient framework. Methodologically, it constructs a learned transition model to generate unlabeled trajectories, employs uncertainty-aware active sampling to select high-confidence trajectory pairs, and trains a reward model via preference-label distillation. Theoretically, it establishes the first generalization bound for reward models based on state-action pairs and proves that policy improvement is guaranteed without any online interaction. Empirically, the method matches fully labeled baselines while reducing preference annotations by at least 50% and requiring no online interaction, with evaluation across multiple standard offline RL benchmarks demonstrating both effectiveness and robustness.

📝 Abstract
Offline preference-based reinforcement learning (PbRL) provides an effective way to overcome the challenges of reward design and the high cost of online interaction. However, since labeling preferences requires real-time human feedback, acquiring sufficient preference labels is challenging. To address this, this paper proposes an offLine prEference-bAsed RL with high Sample Efficiency (LEASE) algorithm, where a learned transition model is leveraged to generate unlabeled preference data. Because the pretrained reward model may assign incorrect labels to unlabeled data, we design an uncertainty-aware mechanism to ensure the reward model's performance, selecting only high-confidence, low-variance data. Moreover, we provide a generalization bound for the reward model to analyze the factors influencing reward accuracy, and demonstrate that the policy learned by LEASE has a theoretical improvement guarantee. The developed theory is based on state-action pairs, so it can be readily combined with other offline algorithms. Experimental results show that LEASE achieves performance comparable to the baseline with fewer preference labels and without online interaction.
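The uncertainty-aware selection described above can be sketched as an ensemble filter: several reward models score each generated trajectory pair, and only pairs on which the ensemble is both confident (mean preference probability far from 0.5) and consistent (low variance across members) receive a pseudo-label. This is a minimal illustration, not the paper's implementation; the ensemble size, thresholds, and function names here are assumptions.

```python
import numpy as np

def select_pseudo_labeled_pairs(ensemble_probs, conf_thresh=0.9, var_thresh=0.01):
    """Hypothetical uncertainty-aware filter for generated preference pairs.

    ensemble_probs: array of shape (n_models, n_pairs); each entry is one
    reward model's predicted probability that trajectory A beats trajectory B.
    Returns (indices of kept pairs, pseudo-labels; 1 means A is preferred).
    """
    mean = ensemble_probs.mean(axis=0)            # ensemble consensus per pair
    var = ensemble_probs.var(axis=0)              # disagreement across models
    confidence = np.maximum(mean, 1.0 - mean)     # distance from the 0.5 boundary
    keep = (confidence >= conf_thresh) & (var <= var_thresh)
    pseudo_labels = (mean >= 0.5).astype(int)
    return np.flatnonzero(keep), pseudo_labels[keep]

# Toy example: 3 ensemble members scoring 4 candidate pairs.
probs = np.array([
    [0.95, 0.55, 0.05, 0.90],
    [0.93, 0.45, 0.08, 0.60],
    [0.97, 0.60, 0.04, 0.95],
])
idx, labels = select_pseudo_labeled_pairs(probs)
# Pairs 1 and 3 are rejected: pair 1 is near 0.5 (low confidence),
# pair 3 has high disagreement, so neither gets a pseudo-label.
```

Only the confidently, consistently ranked pairs (here, pairs 0 and 2) would be fed to reward-model training, which is the mechanism by which incorrect pseudo-labels are kept out of the training set.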
Problem

Research questions and friction points this paper is trying to address.

Offline Reinforcement Learning
Preference-based Learning
Data Collection
Innovation

Methods, ideas, or system contributions that make the work stand out.

LEASE algorithm
Preference-based Reinforcement Learning
High-confidence Data Points