Mitigating Objectness Bias and Region-to-Text Misalignment for Open-Vocabulary Panoptic Segmentation

📅 2026-03-22
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses two limitations of open-vocabulary panoptic segmentation: mask selection bias and insufficient region-level understanding in vision-language models. To jointly mitigate objectness bias and region-to-text misalignment, the authors propose the OVRCOAT framework, built from two key components: CLIP-conditioned Objectness Adjustment (COAT), which preserves high-quality masks for unknown categories, and Open-Vocabulary Region-text Refinement (OVR), which strengthens semantic alignment at the region level. Notably, OVRCOAT achieves state-of-the-art performance using only lightweight modules, without costly fine-tuning of large vision-language models. Experimental results show consistent improvements across benchmarks, with PQ gains of 5.5%, 7.1%, and 3.0% on ADE20K, Mapillary Vistas, and Cityscapes, respectively.

πŸ“ Abstract
Open-vocabulary panoptic segmentation remains hindered by two coupled issues: (i) mask selection bias, where objectness heads trained on closed vocabularies suppress masks of categories not observed in training, and (ii) limited regional understanding in vision-language models such as CLIP, which were optimized for global image classification rather than localized segmentation. We introduce OVRCOAT, a simple, modular framework that tackles both. First, a CLIP-conditioned objectness adjustment (COAT) updates background/foreground probabilities, preserving high-quality masks for out-of-vocabulary objects. Second, an open-vocabulary mask-to-text refinement (OVR) strengthens CLIP's region-level alignment to improve classification of both seen and unseen classes with markedly lower memory cost than prior fine-tuning schemes. The two components combine to jointly improve objectness estimation and mask recognition, yielding consistent panoptic gains. Despite its simplicity, OVRCOAT sets a new state of the art on ADE20K (+5.5% PQ) and delivers clear gains on Mapillary Vistas and Cityscapes (+7.1% and +3% PQ, respectively). The code is available at: https://github.com/nickormushev/OVRCOAT
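The abstract's first component, COAT, rescales a mask's background/foreground probability using CLIP's region-text evidence, so that out-of-vocabulary objects suppressed by a closed-vocabulary objectness head can be recovered. The paper does not publish its exact formula here, but the idea can be illustrated with a minimal NumPy sketch; the function name `coat_adjust`, the geometric-mean fusion, and the `alpha` weight are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def coat_adjust(objectness, clip_region_sims, alpha=0.5):
    """Hypothetical CLIP-conditioned objectness adjustment (sketch).

    objectness:       (N,) foreground scores from the mask head, in [0, 1].
    clip_region_sims: (N, C) cosine similarities between each masked region's
                      CLIP embedding and the C open-vocabulary text embeddings.
    alpha:            weight of the CLIP signal in a geometric-mean fusion.
    """
    # A mask that matches *some* text prompt strongly is likely a real object,
    # even if the closed-vocabulary objectness head scored it low.
    clip_conf = clip_region_sims.max(axis=1)       # (N,) best text match per mask
    clip_conf = (clip_conf + 1.0) / 2.0            # map cosine [-1, 1] -> [0, 1]
    # Fuse the two confidence signals; alpha=0 keeps the original objectness.
    return objectness ** (1.0 - alpha) * clip_conf ** alpha
```

Under this sketch, a novel-category mask with low head objectness but a strong CLIP match to some class prompt gets its foreground probability raised rather than being discarded, which is the behavior the abstract attributes to COAT.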

Problem

Research questions and friction points this paper is trying to address.

open-vocabulary panoptic segmentation
objectness bias
region-to-text misalignment
vision-language models
mask selection bias

Innovation

Methods, ideas, or system contributions that make the work stand out.

open-vocabulary panoptic segmentation
objectness bias mitigation
region-to-text alignment
CLIP-conditioned adjustment
mask-to-text refinement