Segment Concealed Objects with Incomplete Supervision

📅 2025-06-03
🏛️ IEEE Transactions on Pattern Analysis and Machine Intelligence
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the dual challenges of scarce weak/semi-supervised annotations and high visual similarity between foreground objects and background in Incompletely-Supervised Concealed Object Segmentation (ISCOS), this paper proposes SEE, a unified Mean-Teacher framework. Methodologically, SEE (1) leverages the Segment Anything Model (SAM) to generate high-confidence pseudo-labels and introduces a filtering-storage-resharing mechanism to iteratively refine their quality; and (2) incorporates a hybrid-granularity feature grouping module to enhance cross-scale feature consistency and segmentation robustness. Extensive experiments demonstrate that SEE achieves state-of-the-art performance across multiple ISCOS benchmarks. The framework is modular and plug-and-play, significantly boosting the performance of existing ISCOS models without architectural modification. To foster reproducibility and community advancement, the source code will be publicly released.

📝 Abstract
Incompletely-Supervised Concealed Object Segmentation (ISCOS) involves segmenting objects that seamlessly blend into their surrounding environments, utilizing incompletely annotated data, such as weak and semi-annotations, for model training. This task remains highly challenging due to (1) the limited supervision provided by the incompletely annotated training data, and (2) the difficulty of distinguishing concealed objects from the background, which arises from the intrinsic similarities in concealed scenarios. In this paper, we introduce the first unified method for ISCOS to address these challenges. To tackle the issue of incomplete supervision, we propose a unified mean-teacher framework, SEE, that leverages the vision foundation model, "Segment Anything Model (SAM)", to generate pseudo-labels using coarse masks produced by the teacher model as prompts. To mitigate the effect of low-quality segmentation masks, we introduce a series of strategies for pseudo-label generation, storage, and supervision. These strategies aim to produce informative pseudo-labels, store the best pseudo-labels generated, and select the most reliable components to guide the student model, thereby ensuring robust network training. Additionally, to tackle the issue of intrinsic similarity, we design a hybrid-granularity feature grouping module that groups features at different granularities and aggregates these results. By clustering similar features, this module promotes segmentation coherence, facilitating more complete segmentation for both single-object and multiple-object images. We validate the effectiveness of our approach across multiple ISCOS tasks, and experimental results demonstrate that our method achieves state-of-the-art performance. Furthermore, the SEE framework can serve as a plug-and-play solution, enhancing the performance of existing models for ISCOS tasks. The code will be released.
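As a rough illustration of the mean-teacher half of the pipeline described in the abstract, the sketch below shows the two generic ingredients in plain NumPy: an exponential-moving-average (EMA) teacher update and confidence-based selection of reliable pseudo-label pixels. The momentum value and the thresholds are illustrative assumptions, not the paper's exact recipe, and the step of prompting SAM with the teacher's coarse mask is omitted.

```python
import numpy as np

def ema_update(teacher, student, momentum=0.99):
    """Mean-teacher rule: teacher weights are an exponential moving
    average of student weights (momentum value is illustrative)."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

def reliable_mask(teacher_prob, low=0.3, high=0.7):
    """Keep only pixels the teacher is confident about (far from the
    0.5 decision boundary); uncertain pixels would be excluded from
    the loss that supervises the student."""
    return (teacher_prob < low) | (teacher_prob > high)

# Toy example: a single scalar "weight" and a 2x2 probability map.
teacher = {"w": np.array([1.0])}
student = {"w": np.array([0.0])}
teacher = ema_update(teacher, student, momentum=0.9)
print(teacher["w"])            # teacher drifts slowly toward the student

probs = np.array([[0.9, 0.5], [0.1, 0.6]])
print(reliable_mask(probs))    # only the confident pixels are True
```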
Problem

Research questions and friction points this paper is trying to address.

Segment concealed objects with incomplete supervision
Address intrinsic similarity between objects and background
Improve pseudo-label quality for robust network training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified mean-teacher framework leveraging SAM
Hybrid-granularity feature grouping module
Pseudo-label generation and storage strategies
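The storage strategy in the last bullet can be pictured as a small memory bank that keeps, per training image, the best pseudo-label seen so far and re-shares it in later iterations. The quality score used below (mean pixel confidence, i.e. distance from the 0.5 decision boundary) is a hypothetical stand-in, since the summary does not spell out the paper's exact selection criterion.

```python
import numpy as np

class PseudoLabelBank:
    """Best-so-far store: a new pseudo-label replaces the stored one
    only if it scores higher under a (hypothetical) quality score."""

    def __init__(self):
        self._store = {}  # image_id -> (score, mask)

    def update(self, image_id, mask, prob):
        # Hypothetical score: mean distance of pixel probabilities
        # from the 0.5 decision boundary (higher = more confident).
        score = float(np.abs(prob - 0.5).mean())
        best = self._store.get(image_id)
        if best is None or score > best[0]:
            self._store[image_id] = (score, mask)
        return self._store[image_id][1]  # re-shared pseudo-label

bank = PseudoLabelBank()
confident = np.array([[1, 1], [0, 0]])
bank.update("img0", confident, prob=np.array([[0.95, 0.9], [0.05, 0.1]]))
shaky = np.array([[1, 0], [0, 1]])
kept = bank.update("img0", shaky, prob=np.array([[0.6, 0.45], [0.4, 0.55]]))
print(kept)  # still the confident mask: the low-quality update is rejected
```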