Rainbow-DemoRL: Combining Improvements in Demonstration-Augmented Reinforcement Learning

📅 2026-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how to effectively leverage offline demonstration data to improve sample efficiency in online reinforcement learning and systematically evaluates the contributions and composability of different demonstration-augmented strategies. We conduct a large-scale empirical comparison of three mainstream approaches—behavior cloning initialization, offline reinforcement learning pretraining, and demonstration replay—and for the first time delineate their respective regimes of effectiveness. Our results demonstrate that simpler strategies, such as behavior cloning initialization and direct reuse of offline data, consistently outperform more complex offline RL pretraining across most scenarios, and that combining these simple methods yields further performance gains. This work establishes clear and practical design principles for efficient demonstration-augmented reinforcement learning.
📝 Abstract
Several approaches have been proposed to improve the sample efficiency of online reinforcement learning (RL) by leveraging demonstrations collected offline. The offline data can be used directly as transitions to optimize RL objectives, or offline policy and value functions can first be learned from the data and then used for online finetuning or to provide reference actions. While each of these strategies has shown compelling results, it is unclear which method has the most impact on sample efficiency, whether these approaches can be combined, and if there are cumulative benefits. We classify existing demonstration-augmented RL approaches into three categories and perform an extensive empirical study of their strengths, weaknesses, and combinations to isolate the contribution of each strategy and determine effective hybrid combinations for sample-efficient online RL. Our analysis reveals that directly reusing offline data and initializing with behavior cloning consistently outperform more complex offline RL pretraining methods for improving online sample efficiency.
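Of the three strategies the abstract names, demonstration replay is the simplest to picture: offline transitions are mixed into each training batch alongside freshly collected online data. The sketch below is illustrative only and not taken from the paper; the function name, the fixed mixing ratio, and the plain-list buffers are all assumptions.

```python
import random

def sample_mixed_batch(demo_buffer, online_buffer, batch_size, demo_ratio=0.5):
    """Sample one training batch mixing offline demonstrations with
    online transitions -- a minimal sketch of 'demonstration replay'.

    demo_buffer / online_buffer: lists of transition tuples.
    demo_ratio: fraction of the batch drawn from demonstrations
    (here a fixed hyperparameter; schedules are also common).
    """
    n_demo = min(int(batch_size * demo_ratio), len(demo_buffer))
    n_online = batch_size - n_demo
    batch = random.sample(demo_buffer, n_demo)        # without replacement
    batch += random.choices(online_buffer, k=n_online)  # with replacement
    random.shuffle(batch)
    return batch

# Hypothetical usage with toy transitions:
demos = [("demo", i) for i in range(100)]
online = [("online", i) for i in range(100)]
batch = sample_mixed_batch(demos, online, batch_size=8)
```

The appeal of this strategy, per the paper's findings, is that it needs no separate offline pretraining phase: the same RL objective simply sees demonstration transitions at some fixed or scheduled ratio.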
Problem

Research questions and friction points this paper is trying to address.

demonstration-augmented reinforcement learning
sample efficiency
offline demonstrations
online reinforcement learning
empirical study
Innovation

Methods, ideas, or system contributions that make the work stand out.

demonstration-augmented RL
sample efficiency
offline data reuse
behavior cloning
empirical study
Dwait Bhatt
Robotics Graduate Student, UC San Diego
Reinforcement Learning · Robotics · Machine Learning · On-Device AI
Shih-Chieh Chou
Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Taiwan
Nikolay Atanasov
Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA