🤖 AI Summary
Open-vocabulary segmentation is hindered by standard supervised paradigms, which limit generalization to unseen categories; meanwhile, existing vision-language pretraining approaches suffer from opaque transfer mechanisms, resulting in prolonged performance stagnation. To address this, we propose the first interpretable oracle analysis framework, introducing a ground-truth-guided bottleneck disentanglement module that systematically diagnoses fundamental deficiencies along three axes: cross-modal alignment, text-guided segmentation, and visual feature adaptation. Through large-scale, multi-model ablation studies, we quantitatively identify, for the first time, the inherent performance ceiling and its primary causes. Our work establishes a reproducible evaluation benchmark and shifts the research paradigm from empirical tuning to mechanism-driven design, thereby enabling controllable, principled optimization of open-vocabulary segmentation.
📝 Abstract
Standard segmentation setups are unable to deliver models that can recognize concepts outside the training taxonomy. Open-vocabulary approaches promise to close this gap through language-image pretraining on billions of image-caption pairs. Unfortunately, we observe that this promise is not delivered, due to several bottlenecks that have caused performance to plateau for almost two years. This paper proposes novel oracle components that identify and decouple these bottlenecks by taking advantage of ground-truth information. The presented validation experiments deliver important empirical findings that provide deeper insight into the failures of open-vocabulary models and suggest promising directions to unlock future research.
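The core diagnostic idea — swapping one component's output for ground truth to isolate how much performance that component costs — can be sketched in miniature. The snippet below is an illustrative toy, not the paper's actual oracle components: it compares a model's mean IoU against an "oracle" variant in which the (hypothetical) text-guided class assignment is replaced by ground-truth labels, attributing the gap to the classification stage rather than mask quality.

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean IoU over the classes present in the ground truth."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        if not g.any():
            continue  # skip classes absent from this image
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 ground truth with two classes (left half 0, right half 1).
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1]])

# Hypothetical model output: the mask shapes are correct, but the
# text-guided classifier swapped the two labels — a recognition
# failure, not a localization failure.
pred = np.where(gt == 1, 0, 1)

# Oracle: keep the model's masks but substitute the ground-truth
# class assignment (for these inverted masks, this recovers gt).
oracle_pred = gt.copy()

base = miou(pred, gt, num_classes=2)        # 0.0
ceiling = miou(oracle_pred, gt, num_classes=2)  # 1.0
```

The gap between `base` and `ceiling` quantifies how much of the error budget is attributable to the substituted component, which is the measurement the oracle ablations in the paper scale up across models and axes.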