Segment Anyword: Mask Prompt Inversion for Open-Set Grounded Segmentation

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing open-vocabulary image segmentation methods rely heavily on extensive training or fine-tuning, struggling to maintain segmentation coherence and consistency across diverse textual expressions. This paper introduces the first zero-training, language-guided open-vocabulary segmentation framework. It leverages frozen diffusion models (e.g., Stable Diffusion) to generate initial mask prompts from word-level cross-attention maps; incorporates dependency parsing to regularize visual prompts; and proposes a mask prompt inversion mechanism alongside linguistic-structure-driven visual embedding clustering to significantly enhance mask consistency and robustness. Evaluated on Pascal Context 59, gRefCOCO, and GranDf, our method achieves 52.5 mIoU (+6.8), 67.73 cIoU (+25.73), and 67.4 mIoU, respectively—outperforming state-of-the-art fine-tuning approaches and establishing the first high-accuracy, strongly generalizable zero-training grounded segmentation solution.

Technology Category

Application Category

📝 Abstract
Open-set image segmentation poses a significant challenge because existing methods often demand extensive training or fine-tuning and generally struggle to segment unified objects consistently across diverse text reference expressions. Motivated by this, we propose Segment Anyword, a novel training-free visual concept prompt learning approach for open-set language grounded segmentation that relies on token-level cross-attention maps from a frozen diffusion model to produce segmentation surrogates or mask prompts, which are then refined into targeted object masks. Initial prompts typically lack coherence and consistency as the complexity of the image-text increases, resulting in suboptimal mask fragments. To tackle this issue, we further introduce a novel linguistic-guided visual prompt regularization that binds and clusters visual prompts based on sentence dependency and syntactic structural information, enabling the extraction of robust, noise-tolerant mask prompts, and significant improvements in segmentation accuracy. The proposed approach is effective, generalizes across different open-set segmentation tasks, and achieves state-of-the-art results of 52.5 (+6.8 relative) mIoU on Pascal Context 59, 67.73 (+25.73 relative) cIoU on gRefCOCO, and 67.4 (+1.1 relative to fine-tuned methods) mIoU on GranDf, which is the most complex open-set grounded segmentation task in the field.
Problem

Research questions and friction points this paper is trying to address.

Open-set image segmentation lacks consistency across text expressions
Initial mask prompts are incoherent with complex image-text inputs
Existing methods require extensive training or fine-tuning for segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free visual concept prompt learning
Linguistic-guided visual prompt regularization
Token-level cross-attention maps for segmentation
🔎 Similar Papers
No similar papers found.
Zhihua Liu
Zhihua Liu
samsung advanced institute of technology, China lab
computer visionpattern recognition
A
Amrutha Saseendran
Centre for AI, Data Science & Artificial Intelligence, BioPharmaceuticals R&D, AstraZeneca, Cambridge, UK
L
Lei Tong
Centre for AI, Data Science & Artificial Intelligence, BioPharmaceuticals R&D, AstraZeneca, Cambridge, UK
Xilin He
Xilin He
MBZUAI, CUHK
Domain GeneralizationModel RobustnessDiffusion Model
F
Fariba Yousefi
Centre for AI, Data Science & Artificial Intelligence, BioPharmaceuticals R&D, AstraZeneca, Cambridge, UK
N
Nikolay Burlutskiy
Centre for AI, Data Science & Artificial Intelligence, BioPharmaceuticals R&D, AstraZeneca, Cambridge, UK
Dino Oglic
Dino Oglic
AstraZeneca Cambridge
Machine LearningKernel MethodsLearning TheoryRepresentation LearningDrug Design
Tom Diethe
Tom Diethe
AstraZeneca; University of Bristol
Machine LearningComputational BiologyDrug DevelopmentPrivacy Enhancing Technologies
Philip Teare
Philip Teare
AstraZeneca
medical imagingcomputer visionself-supervised learningimputationcounterfactual rendering
Huiyu Zhou
Huiyu Zhou
Professor of Machine Learning, University of Leicester, UK
Machine learningcomputer visionmedical image analysishuman-computer interface
C
Chen Jin
Centre for AI, Data Science & Artificial Intelligence, BioPharmaceuticals R&D, AstraZeneca, Cambridge, UK