ProSAM: Enhancing the Robustness of SAM-based Visual Reference Segmentation with Probabilistic Prompts

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current SAM-based visual referring segmentation methods suffer from unstable and non-robust predictions due to inadequate modeling capacity of prompt encoders, which often generate prompts near object boundaries—regions inherently prone to ambiguity and noise. To address this, we propose a probabilistic prompting mechanism and introduce the Variational Prompt Encoder (VPE), the first prompt encoder that explicitly models the joint probability distribution over multi-variable prompts, thereby avoiding sampling in unstable boundary regions. Our approach integrates variational autoencoding, SAM-adaptive fine-tuning, and vision-language reference guidance to achieve robust zero-shot segmentation. Extensive experiments demonstrate state-of-the-art performance on Pascal-5ⁱ and COCO-20ⁱ, with simultaneous improvements in segmentation accuracy and stability. These results validate that probabilistic prompt modeling is critical for enhancing generalization in open-world referring segmentation tasks.

📝 Abstract
The recent advancements in large foundation models have driven the success of open-set image segmentation, a task focused on segmenting objects beyond predefined categories. Among various prompt types (such as points, boxes, texts, and visual references), visual reference segmentation stands out for its unique flexibility and strong zero-shot capabilities. Recently, several SAM-based methods have made notable progress in this task by automatically generating prompts to guide SAM. However, these methods often generate prompts at object boundaries due to a suboptimal prompt encoder, which results in instability and reduced robustness. In this work, we introduce ProSAM, a simple but effective method to address the stability challenges we identified in existing SAM-based visual reference segmentation approaches. By learning a variational prompt encoder to predict multivariate prompt distributions, ProSAM avoids generating prompts that lie in unstable regions, overcoming the instability caused by less robust prompts. Our approach consistently surpasses state-of-the-art methods on the Pascal-5$^i$ and COCO-20$^i$ datasets, providing a more robust solution for visual reference segmentation.
Problem

Research questions and friction points this paper is trying to address.

Improving robustness in SAM-based visual reference segmentation
Addressing instability from boundary-generated prompts
Enhancing zero-shot segmentation with probabilistic prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses probabilistic prompts for robust segmentation
Learns variational prompt encoder for stability
Predicts multivariate prompt distributions effectively
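The core idea above can be illustrated with a minimal sketch: instead of regressing a single point prompt, a variational prompt encoder outputs the parameters of a distribution over prompt coordinates, samples a prompt via the standard reparameterization trick, and regularizes the distribution with a KL term. This is a simplified illustration, not the paper's implementation; the Gaussian parameterization, the linear head (`W_mu`, `W_logvar`), and the 2-D point-prompt output are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_prompt(feat, W_mu, W_logvar):
    """Hypothetical variational prompt head: maps a reference feature
    vector to a Gaussian over 2-D point-prompt coordinates and samples
    one prompt from it."""
    mu = feat @ W_mu          # (2,) mean prompt location
    logvar = feat @ W_logvar  # (2,) per-coordinate log-variance
    # Reparameterization trick: sample while keeping the path differentiable
    eps = rng.standard_normal(2)
    prompt = mu + np.exp(0.5 * logvar) * eps
    # KL divergence to a standard-normal prior (regularizes the distribution)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return prompt, kl

# Toy usage with random weights standing in for a learned encoder head
feat = rng.standard_normal(8)
W_mu = rng.standard_normal((8, 2)) * 0.1
W_logvar = rng.standard_normal((8, 2)) * 0.1
prompt, kl = variational_prompt(feat, W_mu, W_logvar)
```

Because a low-variance mode of the learned distribution concentrates away from ambiguous boundary pixels, sampled prompts tend to land in stable interior regions, which is the robustness argument the paper makes.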