TeViR: Text-to-Video Reward with Diffusion Models for Efficient Reinforcement Learning

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low sample efficiency of reinforcement learning in robotic manipulation caused by sparse rewards, this paper proposes a general reward engineering framework leveraging text-to-video diffusion models. Specifically, it introduces the first application of pre-trained text-to-video diffusion models to construct dense, semantically aligned, and label-free cross-modal visual trajectory reward functions, enabling fine-grained action evaluation guided by natural language instructions. By integrating visual trajectory comparison and vision-language alignment techniques, the method achieves significant improvements in both sample efficiency and final task performance across 11 challenging robotic manipulation tasks. It consistently outperforms baselines relying on sparse environmental rewards as well as state-of-the-art reward modeling approaches, all without requiring access to ground-truth reward signals from the environment.

📝 Abstract
Developing scalable and generalizable reward engineering for reinforcement learning (RL) is crucial for creating general-purpose agents, especially in the challenging domain of robotic manipulation. While recent advances in reward engineering with Vision-Language Models (VLMs) have shown promise, their sparse reward nature significantly limits sample efficiency. This paper introduces TeViR, a novel method that leverages a pre-trained text-to-video diffusion model to generate dense rewards by comparing the predicted image sequence with current observations. Experimental results across 11 complex robotic tasks demonstrate that TeViR outperforms traditional methods leveraging sparse rewards and other state-of-the-art (SOTA) methods, achieving better sample efficiency and performance without ground truth environmental rewards. TeViR's ability to efficiently guide agents in complex environments highlights its potential to advance reinforcement learning applications in robotic manipulation.
Problem

Research questions and friction points this paper is trying to address.

Developing scalable, generalizable reward engineering for reinforcement learning
Overcoming the sample inefficiency caused by sparse rewards in robotic manipulation tasks
Generating dense reward signals without ground-truth environmental rewards or manual labeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages text-to-video diffusion models
Generates dense rewards from predictions
Outperforms sparse reward methods
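The core mechanism described above, scoring each step of a trajectory by how closely the agent's observations track the frame sequence predicted by the text-to-video model, can be sketched as follows. This is a minimal illustration using a simple pixel-space squared distance; the paper's actual reward likely compares frames in the diffusion model's feature space, and `dense_reward` plus the flattened frame representation here are hypothetical.

```python
def dense_reward(predicted_frames, observed_frames):
    """Per-step dense reward: negative mean squared pixel difference between
    each observed frame and the corresponding frame predicted by the
    text-to-video model. (Illustrative metric only; TeViR's comparison
    may differ.)"""
    rewards = []
    for pred, obs in zip(predicted_frames, observed_frames):
        diffs = [(p - o) ** 2 for p, o in zip(pred, obs)]
        rewards.append(-sum(diffs) / len(diffs))
    return rewards

# Frames are flattened to 1-D pixel lists for simplicity.
predicted = [[0.0, 0.0], [1.0, 1.0]]   # frames the video model predicts
observed = [[0.0, 0.0], [0.0, 0.0]]    # frames the agent actually saw
print(dense_reward(predicted, observed))  # reward drops where they diverge
```

Because every timestep receives a reward rather than only task completion, the RL agent gets a gradient toward the instruction-conditioned predicted trajectory at each step, which is the claimed source of the sample-efficiency gain.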
Yuhui Chen
Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049
Haoran Li
Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049
Zhennan Jiang
Institute of Automation, Chinese Academy of Sciences
Reinforcement learning, Robotics
Haowei Wen
Carnegie Mellon University
Robotics, Reinforcement learning
Dongbin Zhao
Institute of Automation, Chinese Academy of Sciences
Deep Reinforcement Learning, Adaptive Dynamic Programming, Game AI, Smart driving, Robotics