AI Summary
Multimodal large language models (MLLMs) exhibit evaluation bias, overconfidence, and inconsistent cross-domain performance when assessing text-to-image (TTI) generation quality. To address these issues, we propose Multimodal Mixture-of-Bayesian Prompt Ensembles (MMB), a novel method operating in the joint vision-language embedding space. MMB introduces an image-clustering-guided dynamic weight allocation mechanism that adapts prompt weights to each sample's visual features, thereby improving both assessment accuracy and uncertainty calibration. Crucially, MMB requires no model fine-tuning: only prompt engineering and Bayesian ensemble optimization of the decision process. On the HPSv2 and MJBench benchmarks, MMB consistently outperforms existing TTI evaluators, achieving higher agreement with human judgments and reducing calibration error by up to 32.7%. This work establishes a new paradigm for trustworthy, interpretable, and well-calibrated TTI evaluation.
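One way to read the dynamic weight allocation is as a cluster-conditional mixture over prompts. The formalization below is our own sketch of that reading; the symbols (the prompts π_k, the cluster assignment c(I), and the weights w_k) are ours, not notation from the paper:

```latex
% Our sketch, not the paper's notation: the judge's verdict y for a
% text-image pair (x, I) is a mixture over ensemble prompts
% \pi_1, \dots, \pi_K, with weights conditioned on the image's cluster
% c(I) and normalized so that \sum_{k} w_k(c) = 1.
p(y \mid x, I) \;=\; \sum_{k=1}^{K} w_k\!\bigl(c(I)\bigr)\, p(y \mid x, I, \pi_k)
```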
Abstract
Multimodal large language models (MLLMs) are increasingly used to evaluate text-to-image (TTI) generation systems, providing automated judgments based on visual and textual context. However, these "judge" models often suffer from biases, overconfidence, and inconsistent performance across diverse image domains. While prompt ensembling has shown promise for mitigating these issues in unimodal, text-only settings, our experiments reveal that standard ensembling methods fail to generalize effectively to TTI tasks. To address these limitations, we propose a new multimodal-aware method called Multimodal Mixture-of-Bayesian Prompt Ensembles (MMB). Our method uses a Bayesian prompt ensemble approach augmented by image clustering, allowing the judge to dynamically assign prompt weights based on the visual characteristics of each sample. We show that MMB improves accuracy in pairwise preference judgments and greatly enhances calibration, making it easier to gauge the judge's true uncertainty. In evaluations on two TTI benchmarks, HPSv2 and MJBench, MMB outperforms existing baselines in both alignment with human annotations and calibration across varied image content. Our findings highlight the importance of multimodal-specific strategies for judge calibration and suggest a promising path forward for reliable large-scale TTI evaluation.
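To make the mechanism concrete, here is a minimal sketch of a cluster-conditional prompt ensemble in the spirit of MMB. It is not the paper's implementation: the function names (`fit_cluster_prompt_weights`, `ensemble_judge`), the use of k-means, and the Dirichlet-smoothed accuracy weighting are all our illustrative assumptions, and it presumes precomputed joint vision-language embeddings and per-prompt judge scores.

```python
# Minimal sketch of a cluster-conditional Bayesian prompt ensemble for a
# TTI judge, in the spirit of MMB. Embeddings, scores, and the weighting
# scheme are illustrative placeholders, not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans


def fit_cluster_prompt_weights(image_embs, prompt_correct, n_clusters=4, alpha=1.0):
    """Fit per-cluster posterior weights over judge prompts.

    image_embs:     (N, D) joint vision-language embeddings of validation images.
    prompt_correct: (N, P) binary matrix; 1 where prompt p agreed with the
                    human label on validation item n.
    Returns a fitted KMeans model and a (K, P) row-stochastic weight matrix.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(image_embs)
    n_prompts = prompt_correct.shape[1]
    weights = np.zeros((n_clusters, n_prompts))
    for k in range(n_clusters):
        in_cluster = km.labels_ == k
        # Dirichlet-smoothed per-cluster accuracy: alpha acts as a uniform
        # prior pseudo-count, so empty clusters fall back to equal weights.
        counts = prompt_correct[in_cluster].sum(axis=0) + alpha
        weights[k] = counts / counts.sum()
    return km, weights


def ensemble_judge(image_emb, prompt_scores, km, weights):
    """Blend per-prompt judge scores using the image's cluster weights."""
    k = km.predict(image_emb[None, :])[0]
    return float(weights[k] @ prompt_scores)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    embs = rng.normal(size=(200, 64))            # stand-in for CLIP-style embeddings
    correct = rng.integers(0, 2, size=(200, 5))  # stand-in per-prompt correctness
    km, W = fit_cluster_prompt_weights(embs, correct)
    scores = rng.random(5)                       # per-prompt preference scores in [0, 1]
    print(ensemble_judge(embs[0], scores, km, W))
```

Weighting prompts by smoothed per-cluster agreement is just one simple stand-in for a Bayesian posterior over prompts; the paper's actual ensemble optimization may differ.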