Mitigating the Human-Robot Domain Discrepancy in Visual Pre-training for Robotic Manipulation

📅 2024-06-20
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the domain shift arising from morphological differences when transferring models pre-trained on human videos to robotic manipulation tasks, this paper proposes a cross-domain adaptation method that leverages readily available paired human-robot videos. The core contribution is a human-robot contrastive alignment loss, which aligns the semantics of human and robot videos via contrastive learning, adapting pre-trained visual representations to the robot domain in a parameter-efficient manner. Unlike prior approaches that require task-specific supervision in the target robot domain, the method relies only on paired video data and supports both single-task and language-conditioned multi-task settings. Evaluated on 20 simulated tasks across two benchmarks and five real-world tasks with two different pre-trained models, it improves the average success rate by over 7% relative to existing vision pre-training baselines.

📝 Abstract
Learning generalizable visual representations across different embodied environments is essential for effective robotic manipulation in real-world scenarios. However, the limited scale and diversity of robot demonstration data pose a significant challenge. Recent research has explored leveraging large-scale human activity data for pre-training, but the substantial morphological differences between humans and robots introduce a significant human-robot domain discrepancy, hindering the generalization of these models to downstream manipulation tasks. To overcome this, we propose a novel adaptation paradigm that leverages readily available paired human-robot video data to bridge the domain gap. Our method employs a human-robot contrastive alignment loss to align the semantics of human and robot videos, adapting pre-trained models to the robot domain in a parameter-efficient manner. Experiments on 20 simulated tasks across two different benchmarks and five real-world tasks demonstrate significant improvements. These results span both single-task and language-conditioned multi-task settings, evaluated using two different pre-trained models. Compared to existing pre-trained models, our adaptation method improves the average success rate by over 7% across multiple tasks on both simulated benchmarks and real-world evaluations.
Problem

Research questions and friction points this paper is trying to address.

Bridging human-robot domain gap in visual pre-training
Improving generalization for robotic manipulation tasks
Aligning human-robot video semantics efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-robot contrastive alignment loss
Parameter-efficient domain adaptation
Paired human-robot video data
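The human-robot contrastive alignment idea above can be sketched as an InfoNCE-style objective: embeddings of paired human and robot clips act as positives, while all other pairings in the batch act as negatives. This is a minimal illustration under assumed conventions (L2-normalized clip embeddings, a symmetric two-direction loss, a temperature hyperparameter); the function name and shapes are hypothetical, not the paper's released code.

```python
import numpy as np

def contrastive_alignment_loss(human_emb, robot_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired clip embeddings.

    human_emb, robot_emb: (N, D) arrays; row i of each is a matched pair.
    Matched pairs are positives; all cross-batch pairs are negatives.
    (Hypothetical sketch, not the authors' implementation.)
    """
    # L2-normalize so the dot product is cosine similarity
    h = human_emb / np.linalg.norm(human_emb, axis=1, keepdims=True)
    r = robot_emb / np.linalg.norm(robot_emb, axis=1, keepdims=True)
    logits = h @ r.T / temperature  # (N, N) similarity matrix

    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)  # numerical stability
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    idx = np.arange(len(h))
    # human -> robot direction: each row's diagonal entry is the positive
    loss_hr = -log_softmax(logits, axis=1)[idx, idx].mean()
    # robot -> human direction: each column's diagonal entry is the positive
    loss_rh = -log_softmax(logits, axis=0)[idx, idx].mean()
    return 0.5 * (loss_hr + loss_rh)
```

Minimizing this loss pulls each human clip's embedding toward its paired robot clip and pushes it away from other clips in the batch, which is one common way the "align the semantics of human and robot videos" step could be realized; in a parameter-efficient setup, gradients would update only a small adapter on top of the frozen pre-trained encoder.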