K-Sort Eval: Efficient Preference Evaluation for Visual Generation via Corrected VLM-as-a-Judge

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of current evaluation methods for visual generative models, which either rely on costly and inefficient human voting or directly employ vision-language models (VLMs) as evaluators—suffering from hallucinations, biases, and poor alignment with human preferences. To bridge this gap, the authors propose K-Sort Eval, a framework that enhances VLM-human preference alignment through Bayesian posterior correction and incorporates a dynamic matching strategy to balance diversity and uncertainty in evaluations. Experiments on the K-Sort Arena dataset—a high-quality collection of human pairwise comparisons—demonstrate that the method achieves strong consistency with human judgments using fewer than 90 model inference runs, substantially improving evaluation efficiency while maintaining reliability.

📝 Abstract
The rapid development of visual generative models raises the need for more scalable and human-aligned evaluation methods. While crowdsourced Arena platforms offer human preference assessments by collecting human votes, they are costly and time-consuming, which inherently limits their scalability. Leveraging vision-language models (VLMs) as substitutes for manual judgment presents a promising solution. However, the inherent hallucinations and biases of VLMs hinder alignment with human preferences, compromising evaluation reliability. Additionally, static evaluation approaches lead to low efficiency. In this paper, we propose K-Sort Eval, a reliable and efficient VLM-based evaluation framework that integrates posterior correction and dynamic matching. Specifically, we curate a high-quality dataset from thousands of human votes in K-Sort Arena, with each instance containing the outputs and rankings of K models. When a new model is evaluated, it undergoes (K+1)-wise free-for-all comparisons with existing models, and the VLM provides the rankings. To enhance alignment and reliability, we propose a posterior correction method that adaptively corrects the posterior probability in Bayesian updating based on the consistency between the VLM prediction and human supervision. Moreover, we propose a dynamic matching strategy that balances uncertainty and diversity to maximize the expected benefit of each comparison, ensuring more efficient evaluation. Extensive experiments show that K-Sort Eval delivers evaluation results consistent with K-Sort Arena while typically requiring fewer than 90 model runs, demonstrating both its efficiency and reliability.
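The abstract's posterior correction can be illustrated with a minimal sketch. Everything here is a hypothetical reading, not the paper's actual formulation: `BetaPosterior` and the consistency-weighted pseudo-counts are assumptions. The idea is that each VLM verdict updates a Beta posterior over a model's win rate, with its contribution down-weighted by the VLM's measured agreement with human votes.

```python
from dataclasses import dataclass

@dataclass
class BetaPosterior:
    """Beta(alpha, beta) posterior over a model's win rate (hypothetical sketch)."""
    alpha: float = 1.0  # pseudo-count of wins (uniform prior: Beta(1, 1))
    beta: float = 1.0   # pseudo-count of losses

    def update(self, vlm_says_win: bool, consistency: float) -> None:
        # Correction step (assumed form): weight the VLM's verdict by its
        # measured agreement rate with human votes, so an unreliable judge
        # contributes only a fractional pseudo-count to the Bayesian update.
        w = consistency
        if vlm_says_win:
            self.alpha += w
        else:
            self.beta += w

    def mean(self) -> float:
        # Posterior mean estimate of the win rate.
        return self.alpha / (self.alpha + self.beta)
```

For example, two VLM-judged wins from a judge with 0.8 human agreement move the estimate up more cautiously than two fully trusted wins would, which is the qualitative behavior the abstract attributes to posterior correction.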
Problem

Research questions and friction points this paper is trying to address.

visual generative models
human preference evaluation
vision-language models
evaluation reliability
evaluation efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

posterior correction
dynamic matching
VLM-as-a-Judge
preference evaluation
visual generation
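The dynamic matching strategy listed above, which balances uncertainty and diversity, could be sketched as a simple scoring rule. This is an illustrative assumption, not the paper's algorithm: `select_opponent`, the variance term, and the face-count diversity bonus are all hypothetical names introduced here.

```python
def select_opponent(opponents, variance, face_counts, lam=0.5):
    """Pick the opponent maximizing uncertainty + diversity (hypothetical scoring).

    variance: per-model posterior variance of the skill estimate (uncertainty term);
    face_counts: how often the new model has already faced each opponent
    (diversity term); lam trades off the two.
    """
    def score(m):
        # Prefer opponents whose skill is least certain, and opponents
        # the new model has faced least often.
        return variance[m] + lam / (1.0 + face_counts.get(m, 0))
    return max(opponents, key=score)
```

Under this sketch, a high-variance opponent wins out unless it has already been faced many times, in which case a fresher matchup scores higher, mirroring the uncertainty/diversity trade-off the abstract describes.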