The Missing Point in Vision Transformers for Universal Image Segmentation

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In generic image segmentation, Vision Transformer (ViT)-based methods excel at mask generation but suffer from limited classification accuracy due to ambiguous boundaries and class imbalance. To address this, we propose ViT-P, a two-stage framework: (1) a class-agnostic mask proposal stage, followed by (2) a point-driven classification stage that anchors predictions at mask centroids and employs a lightweight adapter compatible with pretrained ViTs, requiring no fine-tuning of the ViT backbone. Our key contributions are: (1) the first centroid-guided decoupled classification paradigm; (2) a pretraining-free adapter design, enabling plug-and-play integration with any ViT; and (3) support for coarse-grained supervision (e.g., bounding boxes), significantly reducing annotation cost. ViT-P achieves state-of-the-art performance: 54.0 Panoptic Quality (PQ) on ADE20K panoptic segmentation, and 87.4% and 63.6% mIoU on Cityscapes and ADE20K semantic segmentation, respectively. Code and models are publicly available.

📝 Abstract
Image segmentation remains a challenging task in computer vision, demanding robust mask generation and precise classification. Recent mask-based approaches yield high-quality masks by capturing global context. However, accurately classifying these masks, especially in the presence of ambiguous boundaries and imbalanced class distributions, remains an open challenge. In this work, we introduce ViT-P, a novel two-stage segmentation framework that decouples mask generation from classification. The first stage employs a proposal generator to produce class-agnostic mask proposals, while the second stage utilizes a point-based classification model built on the Vision Transformer (ViT) to refine predictions by focusing on mask central points. ViT-P serves as a pre-training-free adapter, allowing the integration of various pre-trained vision transformers without modifying their architecture, ensuring adaptability to dense prediction tasks. Furthermore, we demonstrate that coarse and bounding box annotations can effectively enhance classification without requiring additional training on fine annotation datasets, reducing annotation costs while maintaining strong performance. Extensive experiments across COCO, ADE20K, and Cityscapes datasets validate the effectiveness of ViT-P, achieving state-of-the-art results with 54.0 PQ on ADE20K panoptic segmentation, 87.4 mIoU on Cityscapes semantic segmentation, and 63.6 mIoU on ADE20K semantic segmentation. The code and pretrained models are available at: https://github.com/sajjad-sh33/ViT-P
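The core decoupling idea described above can be sketched in a few lines: take class-agnostic binary masks from stage one, compute each mask's centroid, and classify only the feature sampled at that central point. This is a minimal toy illustration, not the paper's implementation; the feature map, the nearest-pixel sampling, and the threshold "classifier" below are all stand-in assumptions (ViT-P uses a ViT-based point classifier on real backbone features).

```python
import numpy as np

def mask_centroid(mask):
    """Return the (row, col) centroid of a binary mask."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

def classify_masks(masks, feature_map, classifier):
    """Decoupled classification sketch: for each class-agnostic mask,
    sample the feature at its centroid and classify that single point."""
    labels = []
    for m in masks:
        cy, cx = mask_centroid(m)
        # nearest-pixel sampling stands in for proper feature interpolation
        feat = feature_map[int(round(cy)), int(round(cx))]
        labels.append(classifier(feat))
    return labels

# Toy demo: two mask proposals on an 8x8 grid, a synthetic feature map,
# and a trivial thresholding "classifier" (all hypothetical stand-ins).
H = W = 8
m1 = np.zeros((H, W), dtype=bool); m1[1:4, 1:4] = True  # centroid (2, 2)
m2 = np.zeros((H, W), dtype=bool); m2[5:8, 5:8] = True  # centroid (6, 6)
fmap = np.arange(H * W, dtype=float).reshape(H, W)
clf = lambda f: int(f > 30)
print(classify_masks([m1, m2], fmap, clf))  # → [0, 1]
```

Because only one point per mask reaches the classifier, the classification stage is decoupled from mask quality at the boundaries, which is where the paper argues most classification errors arise.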
Problem

Research questions and friction points this paper is trying to address.

Improving mask classification accuracy in image segmentation
Decoupling mask generation and classification for better performance
Reducing annotation costs while maintaining segmentation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework decoupling mask generation from classification
Point-based classification using Vision Transformer
Pre-training-free adapter for various ViTs
Sajjad Shahabodini
Mobina Mansoori
Farnoush Bayatmakou (Concordia University; Machine Learning, Medical Image Processing, Text Mining)
J. Abouei
Konstantinos N. Plataniotis
Arash Mohammadi