🤖 AI Summary
Existing video caption evaluation methods rely on human-annotated reference captions, which limits scalability to open-domain scenarios. This paper proposes VC-Inspector, a reference-free video caption evaluation framework grounded in factual accuracy. Large language models generate pseudo-captions of varying quality from supervised data, and a pseudo-caption distillation strategy filters these into high-fidelity training pairs; a multimodal model (Qwen2.5-VL) is then trained end-to-end as an evaluator that jointly models fact detection and semantic matching. On VATEX-Eval, VC-Inspector outperforms prior approaches, achieving strong correlation with human judgments (Spearman’s ρ > 0.85), and it generalizes to image captioning benchmarks (e.g., Flickr8K) by treating images as single-frame videos, demonstrating broad applicability across modalities and domains.
📝 Abstract
Video captions offer concise snapshots of the actors, objects, and actions in a video, serving as valuable assets for applications such as question answering and event localization. However, acquiring human annotations for video captions is costly or even impractical, especially across diverse video domains. Models trained on supervised datasets are difficult to evaluate across domains because reference-based evaluation protocols require ground-truth captions, an assumption that is unrealistic for videos in the wild. To address these limitations, we propose a reference-free evaluation framework that requires no ground-truth captions and focuses on factual grounding to ensure accurate assessment of caption quality. We introduce VC-Inspector, a novel caption-quality evaluator that is both reference-free and factually grounded. Using large language models, we generate pseudo captions of varying quality from supervised data, which are then used to train a multimodal model (i.e., Qwen2.5-VL) as the evaluator. Our approach demonstrates superior alignment with human judgments on the VATEX-Eval dataset, outperforming existing methods. The performance also generalizes to the image-caption datasets Flickr8K-Expert and Flickr8K-CF when images are treated as single-frame videos. Overall, VC-Inspector offers a scalable and generalizable solution for evaluating the factual accuracy of video captions, paving the way for more effective and objective assessment methodologies across diverse video domains.
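The core training-data idea — pairing pseudo captions of varying quality with quality labels — can be illustrated with a toy sketch. Note the assumptions: the paper generates pseudo captions with large language models and its actual labeling scheme is not specified here; the rule-based word-dropping corruption, the `degrade_caption`/`build_pseudo_pairs` helpers, and the linear score mapping below are all hypothetical stand-ins for illustration only.

```python
import random

# Toy sketch: build (pseudo_caption, quality_score) training pairs by degrading
# a reference caption. Simple word deletion stands in for the paper's
# LLM-generated pseudo-captions; scores are illustrative labels, not the
# paper's actual scheme.

def degrade_caption(reference: str, level: int, rng: random.Random) -> str:
    """Return a pseudo caption whose factual quality drops as `level` rises."""
    words = reference.split()
    for _ in range(level):
        if len(words) <= 2:
            break
        # Corrupt one "fact" by deleting a randomly chosen word.
        words.pop(rng.randrange(len(words)))
    return " ".join(words)

def build_pseudo_pairs(reference: str, levels=(0, 1, 2, 3), seed: int = 0):
    """Map degradation levels to (caption, score) pairs; 1.0 = fully faithful."""
    rng = random.Random(seed)
    max_level = max(levels)
    return [
        (degrade_caption(reference, lvl, rng), 1.0 - lvl / (max_level + 1))
        for lvl in levels
    ]

pairs = build_pseudo_pairs("a man slices a tomato on a wooden cutting board")
for caption, score in pairs:
    print(f"{score:.2f}  {caption}")
```

Pairs like these would then supervise the multimodal evaluator, which at inference time scores a candidate caption against the video alone, with no reference caption needed.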