🤖 AI Summary
In robotic manipulation, expert demonstration data are scarce, costly to collect, and frequently accompanied by failure trajectories, yet conventional imitation learning (IL) methods discard such failures, limiting dataset diversity and generalization. Method: This paper proposes a reward-free offline imitation learning paradigm centered on the Self-Supervised Data Filtering (SSDF) framework, which quantitatively assesses trajectory-segment quality and automatically extracts high-quality segments from mixed success/failure demonstrations. SSDF integrates trajectory reconstruction, contrastive consistency modeling, and an unsupervised quality-scoring mechanism, enabling plug-and-play enhancement of downstream policy learners (e.g., BC, BC-RNN). Contribution/Results: Evaluated on ManiSkill2 benchmarks and on real-world tasks with a Franka robot arm, SSDF significantly improves task success rates. Crucially, it shows that carefully curated failure data, previously discarded as unusable, can substantially strengthen generalization, overcoming a fundamental limitation of traditional IL.
📝 Abstract
Improving data utilization, especially for imperfect data from task failures, is crucial for robotic manipulation because real-world data collection is challenging, time-consuming, and expensive. Current imitation learning (IL) typically discards imperfect data and focuses solely on successful expert demonstrations. While reinforcement learning (RL) can learn from exploration and failure, the sim2real gap and its reliance on dense rewards and online exploration make it difficult to apply effectively in real-world scenarios. In this work, we aim to address the challenge of leveraging imperfect data, without requiring reward information, to improve model performance for robotic manipulation in an offline manner. Specifically, we introduce a Self-Supervised Data Filtering framework (SSDF) that combines expert and imperfect data to compute quality scores for failed trajectory segments. High-quality segments from the failed data are used to expand the training dataset, and the enhanced dataset can then be used with any downstream policy learning method for robotic manipulation tasks. Extensive experiments on the ManiSkill2 benchmark, built on the high-fidelity Sapien simulator, and on real-world manipulation tasks with a Franka robot arm demonstrate that SSDF accurately expands the training dataset with high-quality imperfect data and improves success rates across all tasks.
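The filter-then-augment pipeline described above (score failed-trajectory segments, keep the best, merge them with expert data) can be sketched in a few lines. This is a minimal illustration, not the paper's method: SSDF's actual quality scores come from a learned self-supervised model combining trajectory reconstruction and contrastive consistency, whereas this sketch substitutes a simple nearest-neighbor-to-expert proxy score; the function names, the `keep_ratio` parameter, and the distance-based scoring are all assumptions for illustration.

```python
import numpy as np

def segment_scores(expert_states, failed_segments):
    """Proxy quality score for each failed segment: negative mean
    nearest-neighbor distance to expert states (closer to expert
    behavior -> higher score). Stands in for SSDF's learned
    self-supervised scoring model, which this sketch does not implement."""
    scores = []
    for seg in failed_segments:
        # pairwise distances: (len(seg), len(expert_states))
        d = np.linalg.norm(seg[:, None, :] - expert_states[None, :, :], axis=-1)
        scores.append(-d.min(axis=1).mean())  # negate: closer = better
    return np.array(scores)

def filter_segments(expert_states, failed_segments, keep_ratio=0.5):
    """Keep the top-scoring fraction of failed segments for data augmentation."""
    scores = segment_scores(expert_states, failed_segments)
    k = max(1, int(len(failed_segments) * keep_ratio))
    keep_idx = np.argsort(scores)[::-1][:k]  # indices of highest scores
    return [failed_segments[i] for i in keep_idx]

# Usage: the kept segments are appended to the expert dataset, and the
# augmented dataset is handed to any downstream learner (e.g., BC).
expert = np.zeros((10, 3))                 # toy expert states
near = np.full((5, 3), 0.1)                # failed segment close to expert
far = np.full((5, 3), 5.0)                 # failed segment far from expert
kept = filter_segments(expert, [far, near], keep_ratio=0.5)
```

Here only the segment close to the expert distribution survives the filter; the downstream policy learner then trains on expert trajectories plus the retained segments.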