🤖 AI Summary
This work addresses the challenge of unreliable 3D instance segmentation in cluttered scenes, where occlusion, sparse viewpoints, and noisy masks hinder language-guided robotic grasping, by proposing a zero-shot 3D instance segmentation method. It uniquely treats noisy masks as informative cues for constructing a semantics-driven hierarchical instance tree. By integrating cross-view grouping, a conditional replacement strategy, and a consistency-aware update mechanism, the approach maintains robust instance correspondences using only a single post-interaction image. The method further incorporates open-vocabulary semantic embeddings to enable natural-language-guided object selection. Experiments demonstrate its superiority in highly cluttered environments: it achieves an AP@25 of 61.66, more than 2.2 times higher than existing methods, and with only four input views it surpasses MaskClustering using eight.
📝 Abstract
Reliable 3D instance segmentation is fundamental to language-grounded robotic manipulation. It is especially critical in cluttered environments, where occlusions, limited viewpoints, and noisy masks degrade perception. To address these challenges, we present Clutt3R-Seg, a zero-shot pipeline for robust 3D instance segmentation that supports language-grounded grasping in cluttered scenes. Our key idea is to introduce a hierarchical instance tree built from semantic cues. Unlike prior approaches that attempt to refine noisy masks, our method leverages them as informative cues: through cross-view grouping and conditional substitution, the tree suppresses over- and under-segmentation, yielding view-consistent masks and robust 3D instances. Each instance is enriched with open-vocabulary semantic embeddings, enabling accurate target selection from natural language instructions. To handle scene changes during multi-stage tasks, we further introduce a consistency-aware update that preserves instance correspondences from only a single post-interaction image, allowing efficient adaptation without rescanning. Clutt3R-Seg is evaluated on both synthetic and real-world datasets, and validated on a real robot. Across all settings, it consistently outperforms state-of-the-art baselines in cluttered and sparse-view scenarios. Even on the most challenging heavy-clutter sequences, Clutt3R-Seg achieves an AP@25 of 61.66, over 2.2x higher than baselines, and with only four input views it surpasses MaskClustering with eight views by more than 2x. The code is available at: https://github.com/jeonghonoh/clutt3r-seg.
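To make the cross-view grouping idea concrete, here is a minimal toy sketch (not the paper's implementation) of greedily grouping noisy 2D masks by overlap. It assumes all masks have already been projected into a shared frame and uses a simple IoU threshold; the function names and the threshold value are illustrative choices, not part of Clutt3R-Seg.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

def group_masks(masks, thr=0.5):
    """Greedily assign each mask to the first group whose seed mask
    overlaps it by at least `thr` IoU; otherwise start a new group."""
    groups = []  # each group is a list of masks; groups[i][0] is the seed
    for m in masks:
        for g in groups:
            if mask_iou(m, g[0]) >= thr:
                g.append(m)
                break
        else:
            groups.append([m])
    return groups
```

In a full pipeline, each resulting group would correspond to one candidate instance node, with the hierarchical tree then deciding which (possibly over- or under-segmented) masks to keep or substitute.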