🤖 AI Summary
The general perceptual capabilities of vision-language models (VLMs) in both closed-set and open-vocabulary object detection and segmentation remain poorly understood. Method: We establish a unified benchmark spanning 16 representative scenarios—8 detection and 8 segmentation tasks—to enable the first comprehensive, cross-task and cross-paradigm evaluation. We propose a three-tier fine-tuning granularity framework—zero prediction, visual fine-tuning, and text prompting—and assess mainstream VLMs (e.g., CLIP, Flamingo, KOSMOS) under diverse protocols including cross-domain generalization, few-shot learning, crowded scenes, and small-object detection. Contribution/Results: Experiments show that visual fine-tuning substantially improves closed-set detection performance, whereas text prompting generalizes better in open-vocabulary segmentation. Our analysis delineates VLMs' capability boundaries and identifies effective adaptation pathways, providing empirically grounded guidelines and design principles for downstream task customization.
📝 Abstract
Vision-Language Models (VLMs) have gained widespread adoption in Open-Vocabulary (OV) object detection and segmentation tasks. Although they have shown promise on OV-related tasks, their effectiveness in conventional vision tasks has thus far not been systematically evaluated. In this work, we present a systematic review of VLM-based detection and segmentation, treating the VLM as a foundation model and conducting comprehensive evaluations across multiple downstream tasks for the first time: 1) The evaluation spans eight detection scenarios (closed-set detection, domain adaptation, crowded objects, etc.) and eight segmentation scenarios (few-shot, open-world, small object, etc.), revealing distinct performance advantages and limitations of various VLM architectures across tasks. 2) For detection tasks, we evaluate VLMs under three fine-tuning granularities: *zero prediction*, *visual fine-tuning*, and *text prompt*, and further analyze how different fine-tuning strategies affect performance across varied tasks. 3) Based on the empirical findings, we provide an in-depth analysis of the correlations between task characteristics, model architectures, and training methodologies, offering insights for future VLM design. 4) We believe this work will be valuable to pattern recognition experts working in computer vision, multimodal learning, and vision foundation models by introducing them to the problem, familiarizing them with the current state of progress, and outlining promising directions for future research. A project associated with this review and evaluation has been created at https://github.com/better-chao/perceptual_abilities_evaluation.
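To make the *zero prediction* granularity concrete: in CLIP-style zero-shot inference, an image (or region) embedding is compared against text embeddings of candidate class names, and the highest-similarity label is predicted with no parameter updates. The sketch below is a minimal, hypothetical illustration using random placeholder vectors in place of real VLM features; `zero_shot_classify` and the toy embeddings are assumptions for illustration, not code from the reviewed benchmark.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """Rank candidate labels by cosine similarity to an image embedding.

    This mirrors CLIP-style zero-shot ("zero prediction") inference:
    no fine-tuning, just embedding comparison.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = txt @ img  # cosine similarities, shape (num_labels,)
    return labels[int(np.argmax(scores))], scores

# Toy embeddings standing in for real VLM features (assumption for illustration).
rng = np.random.default_rng(0)
labels = ["cat", "dog", "car"]
text_embs = rng.normal(size=(3, 8))
# Construct an image embedding close to the "dog" text embedding.
image_emb = text_embs[1] + 0.1 * rng.normal(size=8)

best, scores = zero_shot_classify(image_emb, text_embs, labels)
print(best)
```

In a real pipeline the placeholder vectors would come from the VLM's image and text encoders, and for detection the image embedding would be computed per region proposal rather than per image.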