Robot Instance Segmentation with Few Annotations for Grasping

📅 2024-07-01
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address instance segmentation for robotic grasping in cluttered scenes with high object variability and extremely scarce annotations (only 1% labeled), this paper proposes a framework that integrates Semi-Supervised Learning (SSL) with Learning Through Interaction (LTI). The method generates pseudo-temporal sequences from single unlabeled still images to model visual consistency across scene changes, and exploits partially annotated data through self-supervision, reducing reliance on extensive manual labeling. On ARMBench, it achieves AP₅₀ = 84.89 with just 1% of annotations, surpassing the fully supervised baseline of 72.0; under full supervision it reaches AP₅₀ = 86.37, almost a 20% improvement over prior state of the art. Key contributions include: (i) a semi-supervised representation learning scheme driven by pseudo-sequence modeling, and (ii) the ability to keep adapting after deployment in real-world robotic settings.
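As a rough illustration of how one unlabeled still image might be turned into a pseudo-temporal sequence, the sketch below produces several progressively perturbed views of the same scene so that a segmentation model can be trained for consistency across simulated scene changes. The function name, transforms, and parameters are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: build a short "pseudo-sequence" from a single
# unlabeled still image via photometric and geometric perturbations.
import torchvision.transforms as T

def make_pseudo_sequence(image, num_frames=4):
    """image: a PIL.Image of a cluttered tote; returns a list of tensors."""
    perturb = T.Compose([
        T.RandomAffine(degrees=5, translate=(0.05, 0.05), scale=(0.95, 1.05)),
        T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
        T.ToTensor(),
    ])
    # Each "frame" is an independently perturbed view of the same scene;
    # agreement between predictions on these frames stands in for the
    # temporal consistency of a real interaction sequence.
    return [perturb(image) for _ in range(num_frames)]
```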

📝 Abstract
The ability of robots to manipulate objects relies heavily on their aptitude for visual perception. In domains characterized by cluttered scenes and high object variability, most methods call for vast labeled datasets, laboriously hand-annotated, with the aim of training capable models. Once deployed, the challenge of generalizing to unfamiliar objects implies that the model must evolve alongside its domain. To address this, we propose a novel framework that combines Semi-Supervised Learning (SSL) with Learning Through Interaction (LTI), allowing a model to learn by observing scene alterations and leverage visual consistency despite temporal gaps without requiring curated data of interaction sequences. As a result, our approach exploits partially annotated data through self-supervision and incorporates temporal context using pseudo-sequences generated from unlabeled still images. We validate our method on two common benchmarks, ARMBench mix-object-tote and OCID, where it achieves state-of-the-art performance. Notably, on ARMBench, we attain an $\text{AP}_{50}$ of $86.37$, almost a $20\%$ improvement over existing work, and obtain remarkable results in scenarios with extremely low annotation, achieving an $\text{AP}_{50}$ score of $84.89$ with just $1\%$ of annotated data compared to $72$ presented in ARMBench on the fully annotated counterpart.
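For readers unfamiliar with the semi-supervised ingredient, the following sketch shows a generic teacher-student update that mixes a small labeled batch with a consistency term on unlabeled views, one common way to exploit only 1% of annotations. The models, loss functions, and hyperparameters are placeholders, not the authors' code.

```python
# Generic teacher-student semi-supervised training step (illustrative only).
import torch

def ema_update(teacher, student, momentum=0.999):
    # Teacher weights track an exponential moving average of the student.
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def semi_supervised_step(student, teacher, sup_loss_fn, unsup_loss_fn,
                         labeled_batch, unlabeled_views, optimizer):
    images, targets = labeled_batch
    loss = sup_loss_fn(student(images), targets)        # supervised term (few labels)
    with torch.no_grad():
        pseudo = teacher(unlabeled_views[0])            # teacher pseudo-labels one view
    loss = loss + unsup_loss_fn(student(unlabeled_views[1]), pseudo)  # consistency term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```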
Problem

Research questions and friction points this paper is trying to address.

Robot instance segmentation
Few annotations
Grasping and manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-Supervised Learning integration
Learning Through Interaction method
Pseudo-sequences from unlabeled images