🤖 AI Summary
Generative vision-language models (VLMs) lack spatially dense cross-modal alignment between visual and linguistic representations, hindering open-vocabulary zero-shot segmentation.
Method: We propose a dense alignment paradigm that leverages generative VLMs to automatically produce fine-grained synthetic image descriptions, which serve as weak supervision to guide pixel-level vision–language embedding alignment, without requiring manually annotated masks. Our approach unifies high-level semantic understanding and precise spatial localization within a joint representation learning and zero-shot transfer framework.
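To make the weak-supervision idea concrete, here is a minimal sketch of one plausible alignment objective: pixel embeddings from a vision encoder are compared against a caption embedding, and a softmax-weighted pooling lets the pixels most similar to the caption dominate the loss, so no pixel-level masks are needed. The function name, pooling choice, and temperature are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dense_alignment_loss(pixel_embs, text_emb, tau=0.07):
    """Caption-level weak supervision for pixel-text alignment (sketch).

    pixel_embs: (N, D) array of L2-normalized pixel embeddings (N = H*W).
    text_emb:   (D,) L2-normalized embedding of a synthetic caption.
    tau:        temperature; an assumed hyperparameter, not from the paper.
    """
    sims = pixel_embs @ text_emb                # cosine similarity per pixel
    # Softmax pooling over pixels: with no masks, we cannot say which
    # pixels a caption describes, so let the best-matching pixels carry
    # most of the weight instead of averaging uniformly.
    weights = np.exp(sims / tau)
    weights /= weights.sum()
    pooled = float((weights * sims).sum())      # pooled image-caption similarity
    return 1.0 - pooled                         # minimizing pulls matching pixels toward the caption
```

In a full system this loss would be computed over a batch of image-caption pairs (typically with negatives, as in contrastive learning); the sketch shows only the mask-free pooling step that replaces per-pixel labels.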
Contribution/Results: The method achieves state-of-the-art performance on major open-vocabulary zero-shot segmentation benchmarks, including Pascal VOC and COCO. It demonstrates superior data efficiency and improved model scalability compared to prior approaches, establishing a new paradigm for modality-agnostic, mask-free dense alignment in generative VLMs.
📝 Abstract
Generative vision-language models (VLMs) exhibit strong high-level image understanding but, as our findings indicate, lack spatially dense alignment between the vision and language modalities. Orthogonal to advancements in generative VLMs, another line of research has focused on representation learning for vision-language alignment, targeting zero-shot inference for dense tasks like segmentation. In this work, we bridge these two directions by densely aligning images with synthetic descriptions generated by VLMs. Synthetic captions are inexpensive, scalable, and easy to generate, making them an excellent source of high-level semantic understanding for dense alignment methods. Empirically, our approach outperforms prior work on standard zero-shot open-vocabulary segmentation benchmarks, while also being more data-efficient.