AugInsert: Learning Robust Visual-Force Policies via Data Augmentation for Object Assembly Tasks

📅 2024-10-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high robustness requirements of peg-in-hole assembly tasks in unstructured environments, this paper evaluates the generalization capability of multimodal (vision + force-torque + proprioception) policies during the contact phase. The authors propose an online multimodal data augmentation framework tailored for contact-rich manipulation, enabling robust visuo-haptic coordination policies to be learned in a bimanual simulation environment from only a small number of human demonstrations. Crucially, they provide a systematic empirical validation of the critical role of force-torque signals in enhancing robustness against physical perturbations, particularly grasp pose deviations. Experiments demonstrate substantial improvements in generalization across unseen grasp poses, object geometries, scene appearances, and sensor noise; assembly success rates increase markedly under physical perturbations, confirming the efficacy of the approach in real-world-relevant conditions.

📝 Abstract
This paper primarily focuses on learning robust visual-force policies in the context of high-precision object assembly tasks. Specifically, we focus on the contact phase of the assembly task, where both objects (peg and hole) have made contact and the objective lies in maneuvering the objects to complete the assembly. Moreover, we aim to learn contact-rich manipulation policies with multisensory inputs on limited expert data by expanding human demonstrations via online data augmentation. We develop a simulation environment with a dual-arm robot manipulator to evaluate the effect of augmented expert demonstration data. Our focus is on evaluating the robustness of our model with respect to certain task variations: grasp pose, peg/hole shape, object body shape, scene appearance, camera pose, and force-torque/proprioception noise. We show that our proposed data augmentation method helps in learning a multisensory manipulation policy that is robust to unseen instances of these variations, particularly physical variations such as grasp pose. Additionally, our ablation studies show the significant contribution of force-torque data to the robustness of our model. For additional experiments and qualitative results, we refer to the project webpage at https://bit.ly/47skWXH.
Problem

Research questions and friction points this paper is trying to address.

Assessing robustness of multisensory policies in peg-in-hole assembly tasks
Identifying generalization challenges in object assembly tasks
Evaluating effectiveness of multisensory data augmentation for robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Perceiver IO for multisensory policy learning
Applies factor-based robustness evaluation framework
Employs multisensory data augmentation technique
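The online multisensory augmentation idea from the bullets above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, observation layout, and noise scales are assumptions, and the paper's full framework also covers grasp pose, camera pose, and object shape variations beyond the appearance and sensor-noise jitter shown here.

```python
import numpy as np

def augment_demo(obs, rng):
    """Apply one random online augmentation to a multisensory observation.

    obs: dict with
      'image'   - (H, W, 3) float32 RGB in [0, 1]
      'ft'      - (6,) force-torque reading
      'proprio' - (D,) proprioceptive (joint) state
    Returns a new dict; the original observation is left untouched.
    """
    img = obs["image"].copy()
    # Scene-appearance augmentation: random brightness/contrast jitter,
    # clipped back into the valid pixel range.
    img = np.clip(img * rng.uniform(0.8, 1.2) + rng.uniform(-0.05, 0.05), 0.0, 1.0)

    # Sensor-noise augmentation: additive Gaussian noise on force-torque
    # and proprioceptive channels (scales are illustrative).
    ft = obs["ft"] + rng.normal(0.0, 0.1, size=obs["ft"].shape)
    proprio = obs["proprio"] + rng.normal(0.0, 0.01, size=obs["proprio"].shape)

    return {"image": img.astype(np.float32), "ft": ft, "proprio": proprio}
```

Because the augmentation is applied online (freshly sampled each time a demonstration is replayed during training), a small set of expert demonstrations effectively covers a much wider band of appearance and sensor-noise conditions.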
Ryan Diaz — Department of Computer Science and Engineering, University of Minnesota, Twin Cities
Adam Imdieke — Department of Computer Science and Engineering, University of Minnesota, Twin Cities
Vivek Veeriah — Google DeepMind
Karthik Desingh — Assistant Professor, University of Minnesota
Robotics · Computer Vision · Machine Learning