🤖 AI Summary
Current SAM-based visual referring segmentation methods suffer from unstable, non-robust predictions due to the inadequate modeling capacity of their prompt encoders, which often generate prompts near object boundaries—regions inherently prone to ambiguity and noise. To address this, we propose a probabilistic prompting mechanism and introduce the Variational Prompt Encoder (VPE), the first prompt encoder that explicitly models the joint probability distribution over multivariate prompts, thereby avoiding sampling in unstable boundary regions. Our approach integrates variational autoencoding, SAM-adaptive fine-tuning, and vision-language reference guidance to achieve robust zero-shot segmentation. Extensive experiments demonstrate state-of-the-art performance on Pascal-5ⁱ and COCO-20ⁱ, with simultaneous improvements in segmentation accuracy and stability. These results validate that probabilistic prompt modeling is critical for enhancing generalization in open-world referring segmentation tasks.
📝 Abstract
Recent advancements in large foundation models have driven the success of open-set image segmentation, a task focused on segmenting objects beyond predefined categories. Among the various prompt types (such as points, boxes, texts, and visual references), visual reference segmentation stands out for its unique flexibility and strong zero-shot capabilities. Recently, several SAM-based methods have made notable progress on this task by automatically generating prompts to guide SAM. However, these methods often generate prompts at object boundaries due to suboptimal prompt encoders, which results in instability and reduced robustness. In this work, we introduce ProSAM, a simple but effective method that addresses the stability challenges we identified in existing SAM-based visual reference segmentation approaches. By learning a variational prompt encoder to predict multivariate prompt distributions, ProSAM avoids generating prompts that lie in unstable regions, overcoming the instability caused by less robust prompts. Our approach consistently surpasses state-of-the-art methods on the Pascal-5$^i$ and COCO-20$^i$ datasets, providing a more robust solution for visual reference segmentation.
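To make the core idea concrete, the sketch below illustrates what "predicting a multivariate prompt distribution and sampling from it" could look like, assuming the prompt is a 2D point and the encoder outputs a diagonal Gaussian. This is a minimal NumPy illustration of the reparameterization-style sampling and KL regularizer typical of variational encoders; the function names and the 2D-point assumption are ours, not from the paper.

```python
import numpy as np

def variational_prompt_sample(mu, log_var, rng):
    """Draw a prompt from N(mu, diag(exp(log_var))) via the
    reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    A low-variance distribution centered inside the object keeps
    sampled prompts away from ambiguous boundary regions."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL(N(mu, diag(sigma^2)) || N(0, I)) — the standard variational
    regularizer that keeps the predicted distribution well-behaved."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Hypothetical usage: the encoder predicts a mean near the object
# center with small variance, so samples concentrate there.
rng = np.random.default_rng(0)
mu = np.array([64.0, 64.0])        # predicted prompt-point mean (pixels)
log_var = np.array([-2.0, -2.0])   # small predicted variance
prompt_point = variational_prompt_sample(mu, log_var, rng)
```

In a full system the mean and log-variance would be predicted by a learned encoder conditioned on the reference image and mask, and the sampled point(s) would be fed to SAM's prompt interface; the snippet only captures the distributional sampling step.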