SAMPO: Visual Preference Optimization for Intent-Aware Segmentation with Vision Foundation Models

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visual foundation models (e.g., SAM) suffer from an “intention gap” in prompt-based segmentation: they respond only to explicit prompts and fail to capture users’ implicit high-level semantic intent—particularly causing under-segmentation of dense, homogeneous objects (e.g., nuclei). To address this, we propose SAMPO—the first language-model-free visual preference optimization framework tailored for dense-object segmentation. SAMPO employs contrastive learning to model pairwise image-mask preferences derived from sparse user interactions, enabling the model to implicitly learn category-level semantic features. This approach significantly reduces reliance on dense annotations and auxiliary prompt generators, achieving efficient intention alignment. Evaluated on three medical segmentation benchmarks, SAMPO achieves state-of-the-art performance: using only 10% of the training data, it surpasses the full-data baseline, yielding over a 9-percentage-point improvement on PanNuke-T2.

📝 Abstract
Foundation models like Segment Anything Model (SAM) excel in promptable segmentation but suffer from an intent gap: they segment only explicitly prompted objects, failing to generalize to semantically related instances implicitly desired by users. This limitation is critical in domains with dense homogeneous objects (e.g., biomedical nuclei segmentation), where sparse visual prompts typically yield incomplete results, rendering dense annotations impractical due to prohibitive cost. To bridge this gap, we introduce SAMPO (Segment Anything Model with Preference Optimization), a novel framework that teaches visual foundation models to infer high-level categorical intent from sparse visual interactions. Unlike conventional pixel-level fine-tuning, SAMPO optimizes models to implicitly capture target-class characteristics through preference optimization. This approach, which operates without dependency on language models, enables robust multi-object segmentation even under sparse prompting and demonstrates superior data efficiency during fine-tuning. Validated on three medical segmentation tasks, SAMPO achieves state-of-the-art performance: on challenging tasks like PanNuke-T2, our method, when fine-tuned with only 10% of the training data, significantly outperforms all existing methods trained on the full 100% dataset, achieving an improvement of over 9 percentage points compared to the best baseline. Our work establishes a new paradigm for intent-aware alignment in visual foundation models, removing dependencies on auxiliary prompt generators or language-model-assisted preference learning.
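The page does not spell out SAMPO's loss, but the abstract's language-model-free preference optimization over pairwise mask preferences can be sketched in the style of a DPO objective applied to mask likelihoods. Everything below is an illustrative assumption: the function names, the per-pixel Bernoulli likelihood, and the frozen-reference formulation are not taken from the paper.

```python
import numpy as np

def mask_log_likelihood(logits, mask):
    """Per-pixel Bernoulli log-likelihood of a binary mask under sigmoid logits."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-8  # guard against log(0)
    return np.sum(mask * np.log(p + eps) + (1.0 - mask) * np.log(1.0 - p + eps))

def preference_loss(policy_logits, ref_logits, preferred, rejected, beta=0.1):
    """DPO-style pairwise loss: push the policy to rank the preferred
    (intent-complete) mask above the rejected (under-segmented) one,
    relative to a frozen reference model."""
    delta = (
        mask_log_likelihood(policy_logits, preferred)
        - mask_log_likelihood(ref_logits, preferred)
        - mask_log_likelihood(policy_logits, rejected)
        + mask_log_likelihood(ref_logits, rejected)
    )
    # -log(sigmoid(beta * delta)); small when the policy clearly prefers `preferred`
    return -np.log(1.0 / (1.0 + np.exp(-beta * delta)))
```

When policy and reference agree, `delta` is zero and the loss sits at log 2; as the policy assigns more likelihood to the intent-complete mask than the under-segmented one, the loss decreases toward zero. This captures the high-level idea of learning a category-level preference from mask pairs without dense supervision, not SAMPO's actual formulation.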
Problem

Research questions and friction points this paper is trying to address.

Bridges intent gap in vision foundation models for segmentation
Enables robust multi-object segmentation with sparse prompts
Improves data efficiency in fine-tuning for medical tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applies language-model-free preference optimization to infer category-level intent
Learns target-class characteristics from sparse visual interactions instead of dense annotations
Removes reliance on auxiliary prompt generators, improving data efficiency in fine-tuning
👥 Authors
Yonghuang Wu (Fudan University)
Wenwen Zeng (Fudan University)
Xuan Xie (Macau University of Science and Technology)
Chengqian Zhao (Fudan University)
Guoqing Wu (Fudan University)
Jinhua Yu (Sun Yat-sen University)