Vision-Language Model for Object Detection and Segmentation: A Review and Evaluation

📅 2025-04-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
The general perceptual capabilities of vision-language models (VLMs) in both closed-set and open-vocabulary object detection and segmentation remain poorly understood. Method: We establish a unified benchmark spanning 16 representative scenarios (eight detection and eight segmentation tasks) to enable the first comprehensive cross-task, cross-paradigm evaluation. We propose a three-tier fine-tuning granularity framework consisting of zero prediction, visual fine-tuning, and text prompting, and assess mainstream VLMs (e.g., CLIP, Flamingo, KOSMOS) under diverse protocols, including cross-domain generalization, few-shot learning, crowded scenes, and small-object detection. Contribution/Results: Experiments show that visual fine-tuning substantially improves closed-set detection performance, whereas text prompting excels at open-vocabulary segmentation generalization. Our analysis delineates VLMs' capability boundaries, identifies effective adaptation pathways, and provides empirically grounded guidelines and design principles for downstream task customization.
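As a rough illustration of the evaluation protocol the summary describes, the harness below sketches how one adaptation regime could be scored across benchmark scenarios. All names, interfaces, and numbers here are hypothetical stand-ins, not the authors' actual code or results.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalResult:
    scenario: str     # e.g. "closed-set" or "open-vocabulary"
    granularity: str  # "zero prediction", "visual fine-tuning", or "text prompt"
    score: float      # e.g. mAP for detection, mIoU for segmentation

def evaluate(score_fn: Callable[[str], float],
             scenarios: List[str],
             granularity: str) -> List[EvalResult]:
    """Run one adaptation regime over every benchmark scenario."""
    return [EvalResult(s, granularity, score_fn(s)) for s in scenarios]

# Toy stand-in for a VLM's per-scenario score (illustrative numbers only).
toy_scores: Dict[str, float] = {"closed-set": 0.42, "open-vocabulary": 0.31}
results = evaluate(toy_scores.__getitem__, list(toy_scores), "zero prediction")
for r in results:
    print(f"{r.scenario:16s} {r.granularity:16s} {r.score:.2f}")
```

The paper's actual benchmark would substitute real detectors/segmenters and metrics for the toy score function; this sketch only shows the cross-scenario, per-granularity bookkeeping the evaluation implies.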

📝 Abstract
Vision-Language Models (VLMs) have gained widespread adoption in Open-Vocabulary (OV) object detection and segmentation tasks. Although they have shown promise on OV-related tasks, their effectiveness in conventional vision tasks has so far not been evaluated. In this work, we present a systematic review of VLM-based detection and segmentation, treat the VLM as a foundation model, and conduct comprehensive evaluations across multiple downstream tasks for the first time: 1) The evaluation spans eight detection scenarios (closed-set detection, domain adaptation, crowded objects, etc.) and eight segmentation scenarios (few-shot, open-world, small object, etc.), revealing distinct performance advantages and limitations of various VLM architectures across tasks. 2) For detection tasks, we evaluate VLMs under three fine-tuning granularities: zero prediction, visual fine-tuning, and text prompt, and further analyze how different fine-tuning strategies affect performance across varied tasks. 3) Based on empirical findings, we provide an in-depth analysis of the correlations between task characteristics, model architectures, and training methodologies, offering insights for future VLM design. 4) We believe this work will be valuable to pattern recognition experts working in computer vision, multimodal learning, and vision foundation models by introducing them to the problem, familiarizing them with the current state of progress, and pointing out promising directions for future research. A project associated with this review and evaluation has been created at https://github.com/better-chao/perceptual_abilities_evaluation.
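The "text prompt" granularity named in the abstract can be sketched as follows: the VLM's weights stay frozen and only the class-name prompts fed to its text encoder vary. The template strings below are illustrative assumptions, not the paper's actual prompt set.

```python
# Hypothetical prompt templates for open-vocabulary detection/segmentation;
# a prompt ensemble per class is a common way to adapt a frozen VLM.
TEMPLATES = [
    "a photo of a {}",
    "a blurry photo of a {}",
    "a photo of a small {}",
]

def build_prompts(class_names):
    """Expand each class name into an ensemble of prompt strings."""
    return {name: [t.format(name) for t in TEMPLATES] for name in class_names}

prompts = build_prompts(["cat", "traffic light"])
print(prompts["cat"][0])  # -> a photo of a cat
```

In a full pipeline, each class's prompt embeddings would typically be averaged into a single text feature and matched against region features; only the prompts change between tasks, which is what makes this the lightest of the three adaptation tiers.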
Problem

Research questions and friction points this paper is trying to address.

Evaluates VLMs on conventional vision tasks that have not previously been assessed
Systematically reviews VLM-based detection and segmentation scenarios
Analyzes how fine-tuning strategies affect performance across varied tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates VLMs on conventional vision tasks
Analyzes fine-tuning strategies for detection tasks
Explores correlations between task characteristics, model architectures, and training methodologies
Yongchao Feng
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Yajie Liu
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Shuai Yang
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Wenrui Cai
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University
Computer Vision · Video Analysis · LLMs
Jinqing Zhang
Beihang University
3D Object Detection · Autonomous Driving
Qiqi Zhan
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Ziyue Huang
The Hong Kong University of Science and Technology
Hongxi Yan
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Qiao Wan
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Chenguang Liu
Delft University of Technology
Stochastic optimization · Stochastic differential equations
Junzhe Wang
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Jiahui Lv
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Ziqi Liu
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Tengyuan Shi
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Qingjie Liu
Professor, School of Computer Science and Engineering, Beihang University
Computer Vision and Pattern Recognition
Yunhong Wang
Professor, School of Computer Science and Engineering, Beihang University
Biometrics · Pattern Recognition · Image Processing · Computer Vision