Video-Language Critic: Transferable Reward Functions for Language-Conditioned Robotics

📅 2024-05-30
🏛️ Trans. Mach. Learn. Res.
📈 Citations: 1
Influential: 1
🤖 AI Summary
To reduce the reliance on large-scale, platform-specific, language-annotated demonstrations in language-conditioned robot learning, this work proposes decoupling the task goal ("what to accomplish") from the execution modality ("how to accomplish it"). Methodologically, it introduces a cross-platform transferable video-language reward model: a temporal video-language multimodal encoder is trained with a hybrid objective combining contrastive learning and a temporal ranking loss, yielding a discriminative reward function. Because the reward model is decoupled from the actor, it transfers zero-shot to a new platform: policy learning requires only visual observations from the target robot, with no language-labeled demonstrations on that robot. Evaluated on Meta-World, the approach achieves a 2x improvement in sample efficiency over a sparse-reward baseline and significantly outperforms prior language-conditioned reward models based on binary classification, static images, or temporally agnostic representations.
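The hybrid objective described above can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation: `info_nce_loss` is a standard symmetric contrastive loss over matched (video, caption) similarity scores, and `temporal_ranking_loss` is a generic pairwise logistic ranking loss that encourages the matched-caption score to increase over time; the function names and the exact loss forms are assumptions for exposition.

```python
import numpy as np

def info_nce_loss(sim, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched (video, caption) pairs.

    sim[i, j] is the similarity between video i and caption j; the
    diagonal holds the positive (matched) pairs.
    """
    logits = sim / temperature

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average over both retrieval directions (video->caption, caption->video)
    return 0.5 * (xent(logits) + xent(logits.T))

def temporal_ranking_loss(frame_scores):
    """Pairwise logistic ranking loss over per-frame critic scores.

    Penalizes any earlier frame scoring at or above a later frame, so the
    score of a successful video grows monotonically toward task completion.
    """
    s = np.asarray(frame_scores, dtype=float)
    loss, n = 0.0, 0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            loss += np.log1p(np.exp(s[i] - s[j]))  # small when s[i] < s[j]
            n += 1
    return loss / max(n, 1)
```

In training, the two terms would be summed (possibly with a weighting coefficient) so the encoder both matches videos to captions and orders frames by task progress.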

📝 Abstract
Natural language is often the easiest and most convenient modality for humans to specify tasks for robots. However, learning to ground language to behavior typically requires impractical amounts of diverse, language-annotated demonstrations collected on each target robot. In this work, we aim to separate the problem of what to accomplish from how to accomplish it, as the former can benefit from substantial amounts of external observation-only data, and only the latter depends on a specific robot embodiment. To this end, we propose Video-Language Critic, a reward model that can be trained on readily available cross-embodiment data using contrastive learning and a temporal ranking objective, and use it to score behavior traces from a separate actor. When trained on Open X-Embodiment data, our reward model enables 2x more sample-efficient policy training on Meta-World tasks than a sparse reward only, despite a significant domain gap. Using in-domain data but in a challenging task generalization setting on Meta-World, we further demonstrate more sample-efficient training than is possible with prior language-conditioned reward models that are either trained with binary classification, use static images, or do not leverage the temporal information present in video data.
Problem

Research questions and friction points this paper is trying to address.

Grounding natural language to robot behavior
Reducing need for annotated robot demonstrations
Learning transferable reward functions from video data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video-language contrastive learning for rewards
Temporal ranking objective on cross-embodiment data
Transferable reward functions from observation-only data
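One way such a critic can densify a sparse-reward task is to reward the per-step *increase* in the critic's score for the trajectory prefix, so the shaped return telescopes to the net score gain while the environment's sparse success bonus is preserved. The sketch below is illustrative only: `frame_scores` stands in for a hypothetical trained critic's scores of each trajectory prefix against the task caption, and the shaping scheme is a generic potential-based construction, not necessarily the paper's exact formulation.

```python
import numpy as np

def dense_rewards(frame_scores, sparse_reward=0.0):
    """Turn per-timestep critic scores into a dense, shaped reward signal.

    frame_scores[t] is a (hypothetical) video-language critic's score for
    the trajectory prefix frames[:t+1] against the task caption. The shaped
    reward at step t is the score increase over the previous step, so the
    total shaped return telescopes to frame_scores[-1] - frame_scores[0].
    """
    s = np.asarray(frame_scores, dtype=float)
    r = np.diff(s, prepend=s[:1])  # r[0] = 0, r[t] = s[t] - s[t-1]
    r[-1] += sparse_reward         # keep the environment's sparse success bonus
    return r
```

Because the shaping term telescopes, it changes the density of feedback without altering which trajectories have the highest return, which is the usual motivation for potential-based shaping.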