Can We Predict Performance of Large Models across Vision-Language Tasks?

📅 2024-10-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
How can we efficiently evaluate large vision-language models (LVLMs) across diverse cross-modal tasks? To address the high cost and incomplete task coverage of LVLM evaluation, this paper formulates it as a probabilistic matrix completion problem with uncertainty estimation, built on a sparse performance matrix (models × tasks). It proposes a Markov chain Monte Carlo (MCMC)-enhanced probabilistic matrix factorization (PMF) method with several improvements for sparse data, enabling uncertainty-driven active evaluation. Experiments demonstrate that the approach significantly improves prediction accuracy and robustness under extremely sparse observations, that its calibrated uncertainty estimates reliably prioritize unevaluated tasks, and that it provides a scalable, interpretable, and efficient evaluation framework for multi-model, multi-task settings.

📝 Abstract
Evaluating large vision-language models (LVLMs) is very expensive, due to high computational cost and the wide variety of tasks. The good news is that if we already have some observed performance scores, we may be able to infer unknown ones. In this study, we propose a new framework for predicting unknown performance scores based on observed ones from other LVLMs or tasks. We first formulate the performance prediction as a matrix completion task. Specifically, we construct a sparse performance matrix $\boldsymbol{R}$, where each entry $R_{mn}$ represents the performance score of the $m$-th model on the $n$-th dataset. By applying probabilistic matrix factorization (PMF) with Markov chain Monte Carlo (MCMC), we can complete the performance matrix, i.e., predict unknown scores. Additionally, we estimate the uncertainty of performance prediction based on MCMC. Practitioners can evaluate their models on untested tasks with higher uncertainty first, which quickly reduces the prediction errors. We further introduce several improvements to enhance PMF for scenarios with sparse observed performance scores. Our experiments demonstrate the accuracy of PMF in predicting unknown scores, the reliability of uncertainty estimates in ordering evaluations, and the effectiveness of our enhancements for handling sparse data.
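The matrix-completion formulation in the abstract can be sketched with plain (non-probabilistic) matrix factorization on observed entries. This is a minimal illustration only, not the paper's MCMC-based PMF: the toy scores, latent dimension, and hyperparameters below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse performance matrix: 4 models x 5 tasks, NaN = not yet evaluated.
R = np.array([
    [0.8, 0.6, np.nan, 0.7, np.nan],
    [0.7, np.nan, 0.5, 0.6, 0.4],
    [np.nan, 0.5, 0.4, np.nan, 0.3],
    [0.9, 0.7, 0.6, 0.8, np.nan],
])
mask = ~np.isnan(R)                 # True where a score was observed

k, lam, lr = 2, 0.01, 0.05          # latent dim, L2 weight, learning rate
U = 0.1 * rng.standard_normal((R.shape[0], k))   # model factors
V = 0.1 * rng.standard_normal((R.shape[1], k))   # task factors

for _ in range(2000):               # gradient descent on observed entries only
    E = np.where(mask, U @ V.T - np.nan_to_num(R), 0.0)
    U -= lr * (E @ V + lam * U)
    V -= lr * (E.T @ U + lam * V)

R_hat = U @ V.T                     # completed matrix: predictions for all cells
print(np.round(R_hat, 2))
```

The unknown cells of `R_hat` are the predicted scores; the paper's PMF variant additionally places priors on `U` and `V` and samples them with MCMC rather than fitting a point estimate.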
Problem

Research questions and friction points this paper is trying to address.

Predicting performance scores of large vision-language models across tasks
Formulating performance prediction as a matrix completion problem
Enhancing probabilistic matrix factorization for sparse performance data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts LVLM performance via matrix completion
Uses PMF and MCMC for score estimation
Enhances PMF for sparse performance data
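The uncertainty-driven evaluation ordering can be illustrated as follows. The posterior samples here are random stand-ins for what the paper's MCMC procedure would actually produce, and all shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

M, N, k, S = 4, 5, 2, 200           # models, tasks, latent dim, posterior samples
mask = rng.random((M, N)) < 0.5     # True where a (model, task) pair was evaluated

# Stand-in for MCMC output: S posterior samples of the latent factors.
U_samples = rng.standard_normal((S, M, k))
V_samples = rng.standard_normal((S, N, k))

# Predictive distribution per cell: one score prediction per posterior sample.
preds = np.einsum('smk,snk->smn', U_samples, V_samples)
std = preds.std(axis=0)             # per-cell predictive uncertainty

# Evaluate the untested pair with the highest uncertainty first.
std_unobs = np.where(mask, -np.inf, std)
m, n = np.unravel_index(np.argmax(std_unobs), std.shape)
print(f"evaluate model {m} on task {n} next (std={std[m, n]:.2f})")
```

Running the most uncertain evaluation first, adding the new score to the observed matrix, and re-fitting is the active-evaluation loop the summary describes: each real measurement shrinks the posterior where it was widest.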