RobotSeg: A Model and Dataset for Segmenting Robots in Image and Video

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robot segmentation faces significant challenges due to morphological diversity, ambiguous appearance, and dynamic structural changes. To address these, we propose a general-purpose robotic visual understanding framework: (1) a structure-enhanced memory association module built upon SAM 2 for explicit structural awareness of articulated robots; (2) a lightweight robot-specific prompt generator enabling automatic and robust segmentation initialization; and (3) a low-annotation-cost weakly supervised training strategy to minimize human intervention. We further introduce VRS—the first large-scale video-based robot segmentation dataset—containing 2.8k videos. Experiments demonstrate state-of-the-art performance on both image and video segmentation benchmarks, with substantially improved cross-morphology and cross-scenario generalization, as well as enhanced deployment efficiency. The framework supports real-world applications including visual servoing, safety monitoring, and simulation-to-reality transfer.

📝 Abstract
Accurate robot segmentation is a fundamental capability for robotic perception. It enables precise visual servoing for VLA systems, scalable robot-centric data augmentation, accurate real-to-sim transfer, and reliable safety monitoring in dynamic human-robot environments. Despite the strong capabilities of modern segmentation models, it remains surprisingly challenging to segment robots, owing to robot embodiment diversity, appearance ambiguity, structural complexity, and rapid shape changes. Embracing these challenges, we introduce RobotSeg, a foundation model for robot segmentation in image and video. RobotSeg is built upon the versatile SAM 2 foundation model but addresses its three limitations for robot segmentation—the lack of adaptation to articulated robots, reliance on manual prompts, and the need for per-frame training mask annotations—by introducing a structure-enhanced memory associator, a robot prompt generator, and a label-efficient training strategy. These innovations collectively enable a structure-aware, automatic, and label-efficient solution. We further construct the video robot segmentation (VRS) dataset, comprising over 2.8k videos (138k frames) with diverse robot embodiments and environments. Extensive experiments demonstrate that RobotSeg achieves state-of-the-art performance on both images and videos, establishing a strong foundation for future advances in robot perception.
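The paper does not reproduce its evaluation protocol here, but video segmentation benchmarks of this kind are conventionally scored with region similarity (J, the mask IoU) averaged over frames. A minimal sketch of that metric, assuming binary per-frame masks (the function names are illustrative, not from the paper):

```python
import numpy as np

def region_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard index (J, region IoU) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks count as a perfect match.
    return 1.0 if union == 0 else float(inter) / float(union)

def video_region_similarity(pred_masks, gt_masks) -> float:
    """Mean J over the frames of a video sequence."""
    scores = [region_similarity(p, g) for p, g in zip(pred_masks, gt_masks)]
    return float(np.mean(scores))

# Example on a single 4x4 frame:
pred = np.zeros((4, 4), dtype=bool); pred[:2, :2] = True  # 4-pixel prediction
gt = np.zeros((4, 4), dtype=bool); gt[:2, :] = True       # 8-pixel ground truth
print(region_similarity(pred, gt))  # intersection 4 / union 8 = 0.5
```

Benchmarks such as DAVIS additionally report contour accuracy (F) and the combined J&F; the sketch above covers only the region term.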
Problem

Research questions and friction points this paper is trying to address.

Segmenting diverse robots in images and videos
Overcoming robot appearance ambiguity and structural complexity
Enabling automatic and label-efficient robot segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-enhanced memory associator for articulated robots
Robot prompt generator for automatic segmentation
Label-efficient training strategy reducing annotation needs