📝 Abstract
Open-vocabulary semantic segmentation enables models to identify novel object categories beyond their training data. While this flexibility represents a significant advancement, current approaches still rely on manually specified class names as input, creating an inherent bottleneck in real-world applications. This work proposes a Vocabulary-Free Semantic Segmentation pipeline that eliminates the need for predefined class vocabularies. Specifically, we address the chicken-and-egg problem in which users must already know all potential objects in a scene to identify them, even though the purpose of segmentation is often to discover those objects. The proposed approach leverages Vision-Language Models to automatically recognize objects and generate appropriate class names, addressing both class specification and naming quality. Through extensive experiments on several public datasets, we highlight the crucial role of the text encoder in model performance, particularly when class names are paired with generated descriptions. Although the sensitivity of the segmentation text encoder to false negatives introduced during class tagging adds complexity to the task, we demonstrate that our fully automated pipeline significantly improves vocabulary-free segmentation accuracy across diverse real-world scenarios.
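The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: both components are hypothetical stubs (`generate_class_names` standing in for a VLM-based tagger, `segment_with_vocabulary` for an open-vocabulary segmenter with a text encoder), and the sample class names are invented for a street scene.

```python
# Hedged sketch of a vocabulary-free segmentation pipeline. The VLM tagger
# and the open-vocabulary segmenter are stubbed; a real system would call
# actual models at both steps.

def generate_class_names(image):
    """Stub for a vision-language model that perceives scene content.
    A real implementation would caption/tag the image and parse out nouns."""
    # Hypothetical output for a street scene (assumed, not from the paper).
    return ["road", "car", "tree", "sky"]

def segment_with_vocabulary(image, class_names):
    """Stub for an open-vocabulary segmenter: in a real system, each pixel
    gets the index of the class name with the highest text-image embedding
    similarity. Here every pixel is trivially assigned class 0."""
    h, w = len(image), len(image[0])
    return [[0 for _ in range(w)] for _ in range(h)]

def vocabulary_free_segment(image):
    """End-to-end pipeline: no user-specified vocabulary is required."""
    names = generate_class_names(image)             # step 1: VLM proposes the lexicon
    labels = segment_with_vocabulary(image, names)  # step 2: segment using that lexicon
    return names, labels

if __name__ == "__main__":
    image = [[0] * 4 for _ in range(3)]  # dummy 3x4 "image"
    names, labels = vocabulary_free_segment(image)
    print(names)  # the automatically generated vocabulary
```

The key design point is that the lexicon is an *output* of step 1 rather than an input from the user, which is what breaks the chicken-and-egg dependency; the abstract's caveat is that noisy or false-negative names produced at step 1 propagate into the text encoder at step 2.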