🤖 AI Summary
Visual foundation models (e.g., SAM) suffer from an "intent gap" in prompt-based segmentation: they respond only to explicit prompts and fail to capture users' implicit high-level semantic intent, which manifests most clearly as under-segmentation of dense, homogeneous objects (e.g., nuclei). To address this, we propose SAMPO, the first language-model-free visual preference optimization framework tailored for dense-object segmentation. SAMPO uses contrastive learning to model pairwise image-mask preferences derived from sparse user interactions, enabling the model to implicitly learn category-level semantic features. This significantly reduces reliance on dense annotations and auxiliary prompt generators, achieving efficient intent alignment. Evaluated on three medical segmentation benchmarks, SAMPO achieves state-of-the-art performance: using only 10% of the training data, it surpasses all baselines trained on the full dataset, improving by over 9 percentage points on PanNuke-T2.
📝 Abstract
Foundation models like Segment Anything Model (SAM) excel in promptable segmentation but suffer from an intent gap: they segment only explicitly prompted objects, failing to generalize to semantically related instances implicitly desired by users. This limitation is critical in domains with dense homogeneous objects (e.g., biomedical nuclei segmentation), where sparse visual prompts typically yield incomplete results, rendering dense annotations impractical due to prohibitive cost. To bridge this gap, we introduce SAMPO (Segment Anything Model with Preference Optimization), a novel framework that teaches visual foundation models to infer high-level categorical intent from sparse visual interactions. Unlike conventional pixel-level fine-tuning, SAMPO optimizes models to implicitly capture target-class characteristics through preference optimization. This approach, which operates without dependency on language models, enables robust multi-object segmentation even under sparse prompting and demonstrates superior data efficiency during fine-tuning. Validated on three medical segmentation tasks, SAMPO achieves state-of-the-art performance: on challenging tasks like PanNuke-T2, our method, when fine-tuned with only 10% of the training data, significantly outperforms all existing methods trained on the full 100% dataset, achieving an improvement of over 9 percentage points compared to the best baseline. Our work establishes a new paradigm for intent-aware alignment in visual foundation models, removing dependencies on auxiliary prompt generators or language-model-assisted preference learning.
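The abstract describes optimizing the model to rank a preferred segmentation (one that covers all implicitly intended instances) above a dispreferred one, without a language model in the loop. As a minimal sketch only, not the paper's actual objective, a Bradley-Terry-style pairwise loss over binary-mask likelihoods could be written as follows; the names `mask_log_likelihood`, `preference_loss`, and the temperature `beta` are illustrative assumptions, not from the paper:

```python
import numpy as np

def mask_log_likelihood(logits: np.ndarray, mask: np.ndarray) -> float:
    """Per-pixel Bernoulli log-likelihood of a binary mask under sigmoid logits."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-8  # numerical floor to avoid log(0)
    return float(np.sum(mask * np.log(p + eps) + (1.0 - mask) * np.log(1.0 - p + eps)))

def preference_loss(logits: np.ndarray,
                    mask_pref: np.ndarray,
                    mask_rej: np.ndarray,
                    beta: float = 1.0) -> float:
    """Pairwise (Bradley-Terry) loss: push the model to assign higher
    likelihood to the preferred mask than to the rejected one."""
    margin = beta * (mask_log_likelihood(logits, mask_pref)
                     - mask_log_likelihood(logits, mask_rej))
    # -log sigmoid(margin): near zero when the preferred mask is ranked higher
    return float(-np.log(1.0 / (1.0 + np.exp(-margin))))
```

In this toy form, a mask that segments every target-class instance would play the preferred role and a mask covering only the explicitly prompted instance the rejected role; minimizing the loss nudges the model's mask logits toward the category-level intent.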