🤖 AI Summary
Open-vocabulary segmentation significantly lags behind fully supervised methods due to the limitations of vision-language models, which provide only image-level supervision and suffer from the semantic ambiguity of natural language. To address these limitations, this work proposes a retrieval-augmented test-time adapter for a few-shot setting that combines textual prompts with pixel-annotated support images. Using a learnable, query-wise cross-modal fusion mechanism, the method dynamically generates lightweight, image-specific classifiers. The approach supports continual expansion of the support set, balancing open-vocabulary generalization against fine-grained segmentation requirements. Extensive experiments demonstrate that it substantially narrows the performance gap between zero-shot and fully supervised segmentation across multiple benchmarks.
📝 Abstract
Open-vocabulary segmentation (OVS) extends the zero-shot recognition capabilities of vision-language models (VLMs) to pixel-level prediction, enabling segmentation of arbitrary categories specified by text prompts. Despite recent progress, OVS lags behind fully supervised approaches due to two challenges: the coarse image-level supervision used to train VLMs and the semantic ambiguity of natural language. We address these limitations by introducing a few-shot setting that augments textual prompts with a support set of pixel-annotated images. Building on this, we propose a retrieval-augmented test-time adapter that learns a lightweight, per-image classifier by fusing textual and visual support features. Unlike prior methods that rely on late, hand-crafted fusion, our approach performs learned, per-query fusion, achieving stronger synergy between the two modalities. The method supports continually expanding support sets and extends to fine-grained tasks such as personalized segmentation. Experiments show that we significantly narrow the gap between zero-shot and supervised segmentation while preserving open-vocabulary ability.
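The abstract does not specify the exact form of the learned per-query fusion, but the idea can be illustrated with a minimal NumPy sketch. Here we assume (hypothetically) a sigmoid gate per query that blends per-class text embeddings with per-class visual support prototypes into a per-query classifier; the names `fuse_per_query` and the gating vector `W` are illustrative, not from the paper.

```python
import numpy as np

def l2norm(x, axis=-1):
    # Normalize vectors along `axis` for cosine-similarity scoring.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def fuse_per_query(queries, text_emb, support_protos, W):
    """Blend text and visual class embeddings into per-query classifiers.

    queries:        (Q, d) mask-query features for one test image
    text_emb:       (C, d) per-class text embeddings (e.g. from a VLM)
    support_protos: (C, d) per-class prototypes pooled from the support set
    W:              (d,)   learned gating vector (hypothetical stand-in
                           for the paper's learned fusion mechanism)
    Returns (Q, C) cosine-similarity logits.
    """
    gate = 1.0 / (1.0 + np.exp(-(queries @ W)))  # (Q,) sigmoid gate per query
    # Per-query classifier: convex combination of the two modalities.
    cls = (gate[:, None, None] * text_emb[None]
           + (1.0 - gate)[:, None, None] * support_protos[None])  # (Q, C, d)
    # Score each query against its own fused classifier.
    return np.einsum('qd,qcd->qc', l2norm(queries), l2norm(cls))

# Toy usage with random features.
rng = np.random.default_rng(0)
Q, C, d = 3, 5, 8
logits = fuse_per_query(rng.normal(size=(Q, d)),
                        rng.normal(size=(C, d)),
                        rng.normal(size=(C, d)),
                        rng.normal(size=d))
```

Because the fused classifier is built per query at test time, expanding the support set (or swapping in personalized prototypes) only changes `support_protos`, with no retraining of the backbone.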