🤖 AI Summary
Offline reinforcement learning (RL) often relies on costly and labor-intensive manual reward annotation. This work proposes ReLOAD, the first framework to integrate Random Network Distillation (RND) into offline RL, enabling automatic intrinsic reward construction from expert demonstrations via self-distillation. Specifically, ReLOAD uses the embedding prediction error—between a fixed target network and a trainable predictor network—on expert state transitions as a structured, hand-crafted-free reward signal, eliminating the need for explicit reward labeling or complex alignment procedures. Theoretical analysis establishes the validity of this prediction error as an effective surrogate reward. Evaluated on the D4RL benchmark, ReLOAD achieves performance competitive with supervised baselines despite receiving no external reward annotations, thereby substantially enhancing the practicality, generalizability, and scalability of offline policy learning.
📝 Abstract
Offline Reinforcement Learning (RL) aims to learn effective policies from a static dataset without requiring further agent-environment interactions. However, its practical adoption is often hindered by the need for explicit reward annotations, which can be costly to engineer or difficult to obtain retrospectively. To address this, we propose ReLOAD (Reinforcement Learning with Offline Reward Annotation via Distillation), a novel reward annotation framework for offline RL. Unlike existing methods that depend on complex alignment procedures, our approach adapts Random Network Distillation (RND) to generate intrinsic rewards from expert demonstrations using a simple yet effective embedding discrepancy measure. First, we train a predictor network to mimic a fixed target network's embeddings based on expert state transitions. Later, the prediction error between these networks serves as a reward signal for each transition in the static dataset. This mechanism provides a structured reward signal without requiring handcrafted reward annotations. We provide a formal theoretical construct that offers insights into how RND prediction errors effectively serve as intrinsic rewards by distinguishing expert-like transitions. Experiments on the D4RL benchmark demonstrate that ReLOAD enables robust offline policy learning and achieves performance competitive with traditional reward-annotated methods.