SPDA-SAM: A Self-prompted Depth-Aware Segment Anything Model for Instance Segmentation

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of conventional instance segmentation methods, which rely on manual prompts and lack depth cues, thereby struggling to accurately perceive spatial structures and object boundaries. To overcome these challenges, the authors propose SPDA-SAM, a novel framework that integrates self-prompting mechanisms and RGB-D fusion into the Segment Anything Model (SAM) for the first time. Specifically, a Semantic-Spatial Self-Prompting Module (SSSPM) enables automatic instance-level guidance without human intervention, while a Coarse-to-Fine RGB-D Fusion Module (C2FFM) effectively combines monocular depth estimates with visual features in a hierarchical manner. Extensive experiments demonstrate that SPDA-SAM significantly outperforms state-of-the-art methods across twelve benchmark datasets, achieving substantial improvements in both accuracy and robustness of instance segmentation.
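The self-prompting idea behind SSSPM can be illustrated with a small sketch: instead of manual clicks, point prompts are derived automatically from the model's own activations, e.g. by picking confident local maxima of a semantic score map. The peak-picking heuristic and all names below are illustrative assumptions, not the paper's actual algorithm.

```python
def local_maxima(score_map, threshold=0.5):
    """Return (row, col) positions that are strict local maxima of a 2D
    activation map and exceed a confidence threshold; in a self-prompting
    pipeline these would serve as point prompts for the mask decoder."""
    h, w = len(score_map), len(score_map[0])
    prompts = []
    for r in range(h):
        for c in range(w):
            v = score_map[r][c]
            if v < threshold:
                continue
            # 8-connected neighborhood, clipped at the map borders
            neighbors = [
                score_map[rr][cc]
                for rr in range(max(0, r - 1), min(h, r + 2))
                for cc in range(max(0, c - 1), min(w, c + 2))
                if (rr, cc) != (r, c)
            ]
            if all(v > n for n in neighbors):
                prompts.append((r, c))
    return prompts

# Example: a single confident peak yields one automatic point prompt.
prompts = local_maxima([[0.1, 0.2, 0.1],
                        [0.2, 0.9, 0.2],
                        [0.1, 0.2, 0.1]])
# prompts == [(1, 1)]
```

In SSSPM's terms, such prompts would come from the semantic features of SAM's image encoder (and spatial cues from its mask decoder), removing the dependency on human-provided prompts.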

📝 Abstract
Recently, the Segment Anything Model (SAM) has demonstrated strong generalizability across a variety of instance segmentation tasks. However, its performance depends heavily on the quality of manual prompts. In addition, the RGB images that instance segmentation methods typically use inherently lack depth information, which hinders their ability to perceive spatial structures and delineate object boundaries. To address these challenges, we propose a Self-prompted Depth-Aware SAM (SPDA-SAM) for instance segmentation. Specifically, we design a Semantic-Spatial Self-prompt Module (SSSPM) that extracts semantic and spatial prompts from the image encoder and the mask decoder of SAM, respectively. Furthermore, we introduce a Coarse-to-Fine RGB-D Fusion Module (C2FFM), in which the features extracted from a monocular RGB image are fused with a depth map estimated from that image. In particular, the structural information in the depth map provides coarse-grained guidance for feature fusion, while local variations in depth are encoded to fuse fine-grained feature representations. To our knowledge, SAM has not previously been explored in such a self-prompted, depth-aware manner. Experimental results demonstrate that SPDA-SAM outperforms its state-of-the-art counterparts across twelve different datasets. These promising results can be attributed to the guidance of the self-prompts and to the coarse-to-fine RGB-D fusion, which compensates for the loss of spatial information.
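The coarse-to-fine fusion described in the abstract (global depth structure as coarse guidance, local depth variation as fine guidance) can be sketched as follows. This is a minimal 1D toy, assuming a simple gated mixing scheme; the function names, the normalization, and the averaging of the two guidance signals are all illustrative assumptions, not the paper's actual C2FFM.

```python
def coarse_gate(depth):
    """Coarse guidance: normalize depth to [0, 1] so that global scene
    structure modulates how strongly depth features are mixed in."""
    lo, hi = min(depth), max(depth)
    rng = (hi - lo) or 1.0
    return [(d - lo) / rng for d in depth]

def fine_weights(depth):
    """Fine guidance: encode local depth variation (central-difference
    gradient magnitude), emphasizing fusion near depth discontinuities,
    i.e. likely object boundaries."""
    grads = []
    for i, d in enumerate(depth):
        left = depth[i - 1] if i > 0 else d
        right = depth[i + 1] if i < len(depth) - 1 else d
        grads.append(abs(right - left) / 2.0)
    peak = max(grads) or 1.0
    return [g / peak for g in grads]

def fuse(rgb_feat, depth_feat, depth):
    """Two-stage fusion: the coarse gate sets a base mixing ratio between
    RGB and depth features; the fine weights sharpen that mix near depth
    edges. Here both stages are combined by a simple average."""
    coarse = coarse_gate(depth)
    fine = fine_weights(depth)
    fused = []
    for r, d, c, f in zip(rgb_feat, depth_feat, coarse, fine):
        alpha = 0.5 * (c + f)  # combined coarse + fine guidance
        fused.append((1 - alpha) * r + alpha * d)
    return fused

# Example: depth features dominate where depth is large or changes fast.
out = fuse([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [1.0, 5.0, 9.0])
# out == [0.75, 0.25, 0.25]
```

In the actual model the same idea would operate on multi-channel feature maps rather than scalars, with learned rather than hand-crafted gates, but the coarse-then-fine modulation is the point being conveyed.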
Problem

Research questions and friction points this paper is trying to address.

instance segmentation
depth-aware
prompt dependency
spatial structure
RGB-D fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-prompting
Depth-aware
RGB-D fusion
Instance segmentation
Segment Anything Model