🤖 AI Summary
Reinforcement learning typically relies on hand-crafted reward functions, making it hard to obtain reliable reward signals for real-world robotic tasks. This work addresses the problem by learning a dense, vision-based reward function from action-free video demonstrations segmented into subtasks, eliminating dependence on manual reward engineering and action-level annotations. Methodologically, the authors (i) introduce, for the first time, subtask segmentation points as weak supervision signals; (ii) train the reward function by minimizing the Equivalent-Policy Invariant Comparison (EPIC) distance to the reward signals implied by the segmentations, so that learned and ground-truth rewards agree up to policy-preserving transformations; and (iii) combine a subtask-conditioned video encoder with contrastive learning to align visual representations with subtask semantics. The approach generalizes across tasks and robot platforms. Evaluated on Meta-World and FurnitureBench, it significantly outperforms baselines: with only minimal segmentation annotations, it achieves an 83.6% success rate on complex assembly tasks and supports zero-shot transfer to novel tasks and robot arms.
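To make the EPIC objective concrete, here is a minimal sketch of its correlation-based core: the distance `sqrt((1 - rho) / 2)`, where `rho` is the Pearson correlation between two reward functions evaluated on the same batch of transitions. The full EPIC pseudometric additionally canonicalizes each reward over sampled transitions to remove potential-based shaping; that step is omitted here, and the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def epic_pseudometric(r1, r2):
    """Correlation core of the EPIC pseudometric: d = sqrt((1 - rho) / 2),
    where rho is the Pearson correlation between two (canonicalized)
    reward samples evaluated on the same batch of transitions.
    """
    rho = np.corrcoef(r1, r2)[0, 1]
    # Clamp for numerical safety: rho can slightly exceed 1 in floating point.
    return float(np.sqrt(max(0.0, (1.0 - rho) / 2.0)))

rng = np.random.default_rng(0)
r = rng.normal(size=10_000)          # reward samples on a batch of transitions
r_scaled = 3.0 * r + 1.5             # positive affine transform: induces the same optimal policy
r_random = rng.normal(size=10_000)   # an unrelated reward function

d_same = epic_pseudometric(r, r_scaled)   # ~0: equivalent rewards are "close"
d_diff = epic_pseudometric(r, r_random)   # ~sqrt(1/2): uncorrelated rewards are "far"
```

Because Pearson correlation is invariant to positive affine rescaling, two rewards that induce the same optimal policy score a distance near zero, which is exactly the property that lets the learned reward be supervised only up to policy equivalence.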
📝 Abstract
Reinforcement Learning (RL) agents have demonstrated their potential across various robotic tasks. However, they still heavily rely on human-engineered reward functions, requiring extensive trial-and-error and access to target behavior information that is often unavailable in real-world settings. This paper introduces REDS: REward learning from Demonstration with Segmentations, a novel reward learning framework that leverages action-free videos with minimal supervision. Specifically, REDS employs video demonstrations segmented into subtasks from diverse sources and treats these segments as ground-truth reward signals. We train a dense reward function conditioned on video segments and their corresponding subtasks, ensuring alignment with the ground-truth reward signals by minimizing the Equivalent-Policy Invariant Comparison distance. Additionally, we employ contrastive learning objectives to align video representations with subtasks, ensuring precise subtask inference during online interactions. Our experiments show that REDS significantly outperforms baseline methods on complex robotic manipulation tasks in Meta-World and on more challenging real-world tasks, such as furniture assembly in FurnitureBench, with minimal human intervention. Moreover, REDS generalizes to unseen tasks and robot embodiments, highlighting its potential for scalable deployment in diverse environments.
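The contrastive alignment between video segments and subtasks can be sketched as a symmetric, CLIP-style InfoNCE objective: paired video and subtask embeddings are pulled together while mismatched pairs in the batch are pushed apart. This is a hypothetical NumPy illustration of that objective, not the paper's exact loss; the function name, temperature value, and embedding shapes are assumptions.

```python
import numpy as np

def info_nce(video_emb, subtask_emb, temperature=0.1):
    """Symmetric InfoNCE loss: row i of video_emb is paired with row i of
    subtask_emb; all other rows in the batch serve as negatives."""
    # L2-normalize so dot products are cosine similarities.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = subtask_emb / np.linalg.norm(subtask_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(v))              # positives lie on the diagonal

    def xent(lg):
        # Numerically stable cross-entropy with diagonal targets.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the video->subtask and subtask->video directions.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(1)
emb = rng.normal(size=(8, 16))
loss_aligned = info_nce(emb, emb)        # perfectly paired embeddings: low loss
loss_shuffled = info_nce(emb, emb[::-1]) # mismatched pairs: high loss
```

Driving this loss down forces the video encoder to place a segment near the embedding of its subtask, which is what lets the reward model infer the current subtask from raw observations during online interaction.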