🤖 AI Summary
This work addresses the poor generalizability and high task-specific engineering cost of hand-crafted reward functions in reinforcement learning. We propose a video-driven reward generation framework that eliminates manual reward design. Its core innovation is the first integration of pre-trained video diffusion models into RL reward modeling: leveraging their implicitly learned world dynamics, the framework generates both video-level and frame-level goal-directed reward signals. To enhance semantic relevance, we employ CLIP-based filtering to identify key frames; in addition, forward-backward representation learning is introduced to promote temporally coherent, goal-driven trajectories. Evaluated on the Meta-World multi-task benchmark, our method significantly improves agent performance on complex visual-goal tasks while fully decoupling learning from task-specific reward engineering. Results demonstrate strong generalization across diverse manipulation tasks without any hand-designed reward function.
📝 Abstract
Reinforcement Learning (RL) has achieved remarkable success in various domains, yet it often relies on carefully designed programmatic reward functions to guide agent behavior. Designing such reward functions is challenging, and the results may not generalize well across tasks. To address this limitation, we leverage the rich world knowledge contained in pretrained video diffusion models to provide goal-driven reward signals for RL agents without ad-hoc reward design. Our key idea is to exploit off-the-shelf video diffusion models pretrained on large-scale video datasets as informative reward functions at both the video level and the frame level. For video-level rewards, we first fine-tune a pretrained video diffusion model on domain-specific datasets and then employ its video encoder to evaluate the alignment between the latent representations of the agent's trajectories and the generated goal videos. To enable more fine-grained goal achievement, we derive a frame-level goal by using CLIP to identify the most task-relevant frame of the generated video, which serves as the goal state. We then employ a learned forward-backward representation, which captures the probability of visiting the goal state from a given state-action pair, as the frame-level reward, promoting more coherent and goal-driven trajectories. Experiments on various Meta-World tasks demonstrate the effectiveness of our approach.
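The two reward signals described above can be sketched at a high level as follows. This is a minimal illustration, not the paper's implementation: it assumes all embeddings (trajectory and goal-video latents from the diffusion model's video encoder, CLIP image/text features, and the forward-backward representations F(s, a) and B(g)) have already been computed, and all function names here are hypothetical.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def video_level_reward(traj_latent: np.ndarray, goal_latent: np.ndarray) -> float:
    # Video-level reward: alignment between the encoded agent trajectory and
    # the generated goal video, both in the video encoder's latent space.
    return cosine_similarity(traj_latent, goal_latent)


def select_goal_frame(frame_embeddings: list[np.ndarray],
                      text_embedding: np.ndarray) -> int:
    # Frame-level goal: pick the generated frame whose CLIP image embedding
    # best matches the task description's CLIP text embedding.
    sims = [cosine_similarity(f, text_embedding) for f in frame_embeddings]
    return int(np.argmax(sims))


def frame_level_reward(forward_repr: np.ndarray,
                       backward_goal_repr: np.ndarray) -> float:
    # Forward-backward reward: the inner product F(s, a) . B(g) scores how
    # likely the goal state g is to be visited from state-action pair (s, a).
    return float(np.dot(forward_repr, backward_goal_repr))
```

In practice the embeddings would come from the fine-tuned diffusion model's video encoder, a CLIP model, and a separately trained forward-backward network; the dot-product and cosine-similarity structure is the part the abstract specifies.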