🤖 AI Summary
This study investigates how to leverage offline demonstration data effectively to improve sample efficiency in online reinforcement learning, and systematically evaluates the contributions and composability of different demonstration-augmented strategies. We conduct a large-scale empirical comparison of three mainstream approaches (behavior cloning initialization, offline reinforcement learning pretraining, and demonstration replay) and, for the first time, delineate their respective regimes of effectiveness. Our results show that simpler strategies, such as behavior cloning initialization and direct reuse of offline data, outperform more complex offline RL pretraining in most scenarios, and that combining these simple methods yields further performance gains. This work establishes clear, practical design principles for efficient demonstration-augmented reinforcement learning.
📝 Abstract
Several approaches have been proposed to improve the sample efficiency of online reinforcement learning (RL) by leveraging demonstrations collected offline. The offline data can be used directly as transitions to optimize RL objectives, or an offline policy and value function can first be learned from the data and then used for online finetuning or to provide reference actions. While each of these strategies has shown compelling results, it is unclear which has the greatest impact on sample efficiency, whether the approaches can be combined, and whether their benefits are cumulative. We classify existing demonstration-augmented RL approaches into three categories and perform an extensive empirical study of their strengths, weaknesses, and combinations, isolating the contribution of each strategy and identifying effective hybrids for sample-efficient online RL. Our analysis reveals that directly reusing offline data and initializing with behavior cloning consistently outperform more complex offline RL pretraining for improving online sample efficiency.
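To make the "direct reuse of offline data" strategy concrete, here is a minimal sketch of a replay buffer that mixes a fixed set of offline demonstrations with transitions collected online, so each training batch draws from both. The class name, `demo_ratio` parameter, and transition format are illustrative assumptions, not the paper's exact implementation:

```python
import random


class MixedReplayBuffer:
    """Samples training batches from a mix of offline demonstrations
    and online transitions. A simple instance of reusing demo data
    directly in the RL objective (names/ratios are illustrative)."""

    def __init__(self, demos, demo_ratio=0.5, seed=0):
        self.demos = list(demos)       # fixed offline demonstrations
        self.online = []               # transitions collected online
        self.demo_ratio = demo_ratio   # fraction of each batch from demos
        self.rng = random.Random(seed)

    def add(self, transition):
        """Store a transition gathered during online interaction."""
        self.online.append(transition)

    def sample(self, batch_size):
        """Draw a batch: demo_ratio from demos, the rest from online data."""
        n_demo = min(int(batch_size * self.demo_ratio), len(self.demos))
        n_online = batch_size - n_demo
        batch = self.rng.choices(self.demos, k=n_demo) if n_demo else []
        if self.online and n_online:
            batch += self.rng.choices(self.online, k=n_online)
        return batch


# Toy usage: 10 demo transitions, 20 online transitions, mixed 50/50.
demos = [("s_demo", "a_demo", 1.0)] * 10
buf = MixedReplayBuffer(demos, demo_ratio=0.5)
for _ in range(20):
    buf.add(("s_online", "a_online", 0.0))
batch = buf.sample(8)  # 4 demo transitions + 4 online transitions
```

Combining this with behavior cloning initialization simply means pretraining the policy on `demos` with a supervised loss before online RL begins; the buffer above then keeps the demonstrations in play during finetuning.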