🤖 AI Summary
This work addresses two core challenges in large-scale robotic manipulation datasets: (1) identifying high-value diversity dimensions that enhance data utility, and (2) efficiently retrieving task-aligned demonstrations from existing datasets. To this end, we introduce a programmable data generation framework that explicitly models controllable diversity variables, including camera pose, object category, and spatial layout. Our analysis reveals that camera pose and spatial arrangement are critical determinants of both dataset diversity and task alignment. We further propose a task-oriented demonstration retrieval algorithm grounded in joint geometric-semantic alignment. Evaluated on real-world datasets including DROID, our retrieval strategy improves downstream policy performance by up to 70%. Crucially, the insights and gains observed in simulation carry over to physical robot platforms, demonstrating robust cross-domain transferability.
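The summary names a retrieval algorithm based on joint geometric-semantic alignment but does not give its details. The following is a toy sketch only, assuming one plausible instantiation: each demonstration is scored by a weighted combination of camera-pose proximity (geometric) and task-embedding similarity (semantic), and the top-scoring demonstrations are retrieved. All function names, pose encodings, and weights here are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def joint_alignment_score(demo_pose, target_pose, demo_emb, target_emb, w_geo=0.5):
    """Hypothetical joint geometric-semantic alignment score for one demo.

    demo_pose / target_pose: 6-D camera poses (xyz + rpy), an assumed encoding.
    demo_emb / target_emb: unit-norm semantic task embeddings (assumed given).
    """
    # Geometric alignment: Euclidean pose distance squashed into (0, 1].
    geo = np.exp(-np.linalg.norm(demo_pose - target_pose))
    # Semantic alignment: cosine similarity rescaled from [-1, 1] to [0, 1].
    sem = 0.5 * (1.0 + float(demo_emb @ target_emb))
    # Convex combination of the two alignment terms.
    return w_geo * geo + (1.0 - w_geo) * sem

def retrieve_top_k(demos, target_pose, target_emb, k=3):
    """Return indices of the k demos with the highest joint alignment score."""
    scores = [joint_alignment_score(pose, target_pose, emb, target_emb)
              for pose, emb in demos]
    return list(np.argsort(scores)[::-1][:k])
```

In this sketch, a demonstration whose camera pose and task embedding both match the target ranks first; the weight `w_geo` trades off the two alignment terms and would need tuning in practice.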
📝 Abstract
Imitation learning from large multi-task demonstration datasets has emerged as a promising path for building generally capable robots. As a result, thousands of hours have been spent building such large-scale datasets around the globe. Despite the continuous growth of these efforts, we still lack a systematic understanding of what data should be collected to improve the utility of a robotics dataset and facilitate downstream policy learning. In this work, we conduct a large-scale dataset composition study to answer this question. We develop a data generation framework that procedurally emulates common sources of diversity in existing datasets (such as sensor placements, object types, and object arrangements), and use it to generate large-scale robot datasets with controlled compositions, enabling a suite of dataset composition studies that would be prohibitively expensive in the real world. We focus on two practical settings: (1) what types of diversity should be emphasized when future researchers collect large-scale datasets for robotics, and (2) how current practitioners should retrieve relevant demonstrations from existing datasets to maximize downstream policy performance on tasks of interest. Our study yields several critical insights -- for example, we find that camera poses and spatial arrangements are crucial dimensions for both diversity in collection and alignment in retrieval. In real-world robot learning settings, we find not only that our insights from simulation carry over, but also that our retrieval strategies on existing datasets such as DROID allow us to consistently outperform existing training strategies by up to 70%. More results at https://robo-mimiclabs.github.io/