Resampling Benchmark for Efficient Comprehensive Evaluation of Large Vision-Language Models

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language model (VLM) evaluation relies on exhaustive multi-benchmark testing, which incurs high computational cost, suffers from incomplete coverage, and is vulnerable to dataset bias. To address these limitations, we propose an efficient VLM evaluation protocol. First, we introduce Farthest Point Sampling (FPS) into benchmark construction—leveraging multi-benchmark categorical analysis and correlation validation to automatically select a representative subset. Second, we design a bias diagnosis module to quantify and mitigate dataset shift. Experiments demonstrate that our method achieves >0.96 Spearman correlation with full-benchmark evaluation using only ~1% of the samples, substantially reducing computational overhead while enhancing evaluation robustness and generalizability. Our approach establishes a new paradigm for lightweight, reliable, and interpretable VLM assessment.
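The summary's headline claim is a >0.96 Spearman rank correlation between subset and full-benchmark model rankings. As an illustration of how such a validation is computed, here is a minimal rank-correlation sketch using only NumPy; the per-model scores are invented placeholder numbers, not results from the paper (with no tied scores, Pearson correlation of the ranks equals Spearman's rho):

```python
import numpy as np

def spearman_rho(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman rank correlation for tie-free score vectors:
    rank both vectors, then take the Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# Hypothetical per-model accuracies: full benchmark vs. a small subset.
full_scores   = np.array([71.2, 65.8, 80.4, 58.1, 74.9, 69.3])
subset_scores = np.array([70.5, 66.1, 79.8, 59.0, 74.2, 68.7])

rho = spearman_rho(full_scores, subset_scores)
print(f"Spearman rho = {rho:.3f}")
```

If the subset preserves the full benchmark's model ranking (as in this toy data, where the orderings are identical), rho is close to 1, clearing the paper's 0.96 threshold.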

📝 Abstract
We propose an efficient evaluation protocol for large vision-language models (VLMs). Given their broad knowledge and reasoning capabilities, multiple benchmarks are needed for comprehensive assessment, making evaluation computationally expensive. To improve efficiency, we construct a subset that yields results comparable to full benchmark evaluations. Our benchmark classification experiments reveal that no single benchmark fully covers all challenges. We then introduce a subset construction method using farthest point sampling (FPS). Our experiments show that FPS-based benchmarks maintain a strong correlation (>0.96) with full evaluations while using only ~1% of the data. Additionally, applying FPS to an existing benchmark improves correlation with overall evaluation results, suggesting its potential to reduce unintended dataset biases.
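The paper does not publish its implementation, but greedy farthest point sampling is a standard routine: repeatedly pick the item whose distance to the already-selected set is largest. A minimal sketch over hypothetical benchmark-item embeddings, assuming Euclidean distance (the paper's feature space and metric are not specified here), might look like:

```python
import numpy as np

def farthest_point_sampling(features: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Greedily select k row indices from `features` (n x d) so that each
    new pick is the point farthest from the set selected so far."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]  # arbitrary starting point
    # Distance from every item to its nearest selected item.
    min_dist = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(min_dist))  # farthest remaining item
        selected.append(nxt)
        dist = np.linalg.norm(features - features[nxt], axis=1)
        min_dist = np.minimum(min_dist, dist)  # update nearest-selected distances
    return np.array(selected)

# Toy usage: 1,000 items embedded in 16-D, keep ~1% (10 samples).
items = np.random.default_rng(42).normal(size=(1000, 16))
subset = farthest_point_sampling(items, k=10)
print(subset.shape)  # (10,)
```

Because each pick maximizes distance to the current set, the subset spreads across the embedding space rather than clustering in dense regions, which is what lets a ~1% sample approximate full-benchmark coverage.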
Problem

Research questions and friction points this paper is trying to address.

Efficient evaluation of large vision-language models
Subset construction for comprehensive benchmark assessment
Reducing dataset biases with farthest point sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Subset construction via farthest point sampling
Efficient evaluation with minimal data usage
Mitigation of unintended dataset biases in existing benchmarks