🤖 AI Summary
To address the high robustness requirements of peg-in-hole assembly tasks in unstructured environments, this paper evaluates the generalization capability of multimodal (vision + force-torque + proprioceptive) policies during the contact phase. We propose an online multimodal data augmentation framework tailored for contact-rich manipulation, enabling robust visuo-haptic coordination policies to be learned in a bimanual simulation environment from only a small number of human demonstrations. Crucially, we provide a systematic empirical validation of the critical role of force-torque signals in enhancing robustness against physical perturbations, particularly grasp pose deviations. Experiments demonstrate substantial improvements in generalization across unseen grasp poses, object geometries, scene appearances, and sensor noise. Notably, assembly success rates increase markedly under physical perturbations, confirming the efficacy of the approach under conditions relevant to real-world deployment.
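To make "online multimodal data augmentation" concrete, below is a minimal sketch of how such a pipeline could look. The `MultimodalAugment` class, the transform choices (random crop, color jitter, additive Gaussian sensor noise), and all parameter values are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of online multimodal augmentation for demonstrations.
# Transform choices and noise scales are assumptions for illustration only.
import torch
import torchvision.transforms as T


class MultimodalAugment:
    """Applies fresh perturbations to each (image, wrench, proprio) sample."""

    def __init__(self, ft_noise_std=0.1, proprio_noise_std=0.01):
        # Visual augmentation: random crop plus photometric jitter.
        self.image_aug = T.Compose([
            T.RandomResizedCrop(224, scale=(0.8, 1.0)),
            T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
        ])
        self.ft_noise_std = ft_noise_std            # force-torque noise (N / Nm)
        self.proprio_noise_std = proprio_noise_std  # joint-state noise (rad)

    def __call__(self, image, wrench, proprio):
        image = self.image_aug(image)
        # Additive Gaussian noise on the 6-D force-torque reading.
        wrench = wrench + self.ft_noise_std * torch.randn_like(wrench)
        # Small perturbation on the proprioceptive joint states.
        proprio = proprio + self.proprio_noise_std * torch.randn_like(proprio)
        return image, wrench, proprio
```

Because perturbations are sampled fresh each time a demonstration is drawn during training, the effective dataset grows without storing extra copies, which is what distinguishes online augmentation from a fixed offline expansion of the demonstration set.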
📝 Abstract
This paper focuses on learning robust visual-force policies for high-precision object assembly tasks. Specifically, we address the contact phase of assembly, where the peg and hole have already made contact and the objective is to maneuver the objects to complete the assembly. We aim to learn contact-rich manipulation policies with multisensory inputs from limited expert data by expanding human demonstrations via online data augmentation. We develop a simulation environment with a dual-arm robot manipulator to evaluate the effect of the augmented expert demonstration data, assessing the robustness of our model with respect to several task variations: grasp pose, peg/hole shape, object body shape, scene appearance, camera pose, and force-torque/proprioception noise. We show that the proposed data augmentation method yields a multisensory manipulation policy that is robust to unseen instances of these variations, particularly physical variations such as grasp pose. Additionally, our ablative studies show the significant contribution of force-torque data to the robustness of our model. For additional experiments and qualitative results, see the project webpage at https://bit.ly/47skWXH.
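As a rough illustration of what a multisensory (vision + force-torque + proprioception) policy could look like, here is a minimal PyTorch sketch with one encoder per modality fused by concatenation. The `MultisensoryPolicy` class, the layer sizes, and the assumed 14-D dual-arm joint state are hypothetical, not the paper's exact architecture.

```python
# Hypothetical multisensory policy sketch: separate encoders for vision,
# force-torque, and proprioception, fused into a continuous action head.
# All architecture details are assumptions for illustration only.
import torch
import torch.nn as nn


class MultisensoryPolicy(nn.Module):
    def __init__(self, action_dim=7):
        super().__init__()
        # Small CNN encoder for the camera image (3x224x224 assumed).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32-D feature
        )
        # MLP encoders for the 6-D wrench and a 14-D dual-arm joint state.
        self.ft = nn.Sequential(nn.Linear(6, 32), nn.ReLU())
        self.proprio = nn.Sequential(nn.Linear(14, 32), nn.ReLU())
        # Fused features predict a continuous action (e.g., end-effector delta).
        self.head = nn.Sequential(
            nn.Linear(32 * 3, 64), nn.ReLU(), nn.Linear(64, action_dim),
        )

    def forward(self, image, wrench, joints):
        z = torch.cat(
            [self.vision(image), self.ft(wrench), self.proprio(joints)],
            dim=-1,
        )
        return self.head(z)
```

Under this kind of design, ablating the force-torque branch (e.g., zeroing `wrench`) is a straightforward way to probe how much contact feedback contributes to robustness against physical perturbations such as grasp pose deviations.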