🤖 AI Summary
AIGV evaluation relies heavily on inefficient human annotation and lacks a unified, automated benchmarking framework. Method: This paper introduces the first bidirectional, fine-grained evaluation framework supporting both text-to-video (T2V) generation and video-to-text (V2T) understanding. We construct AIGVE-60K, the largest high-quality benchmark to date, comprising 3,050 fine-grained prompts, 120K human mean-opinion scores (MOSs), and 60K question-answer pairs over 58,500 videos generated by 30 T2V models. We further propose LOVE, an LMM-based evaluation metric that jointly models perceptual quality, text-video alignment, and task-specific accuracy. Contribution/Results: LOVE achieves state-of-the-art performance on AIGVE-60K and demonstrates strong cross-benchmark generalization. All components, including the dataset, code, and models, are fully open-sourced to advance standardized, reproducible AIGV evaluation research.
📝 Abstract
Recent advancements in large multimodal models (LMMs) have driven substantial progress in both text-to-video (T2V) generation and video-to-text (V2T) interpretation tasks. However, current AI-generated videos (AIGVs) still exhibit limitations in perceptual quality and text-video alignment. A reliable and scalable automatic model for AIGV evaluation is therefore desirable, and building one depends heavily on the scale and quality of human annotations. To this end, we present AIGVE-60K, a comprehensive dataset and benchmark for AI-Generated Video Evaluation, which features (i) comprehensive tasks, encompassing 3,050 extensive prompts across 20 fine-grained task dimensions; (ii) the largest-scale human annotations to date, including 120K mean-opinion scores (MOSs) and 60K question-answering (QA) pairs annotated on 58,500 videos generated by 30 T2V models; and (iii) bidirectional benchmarking and evaluation of both T2V generation and V2T interpretation capabilities. Based on AIGVE-60K, we propose LOVE, an LMM-based metric for AIGV Evaluation along multiple dimensions, including perceptual preference, text-video correspondence, and task-specific accuracy, at both the instance and model levels. Comprehensive experiments demonstrate that LOVE not only achieves state-of-the-art performance on the AIGVE-60K dataset but also generalizes effectively to a wide range of other AIGV evaluation benchmarks. These findings highlight the significance of the AIGVE-60K dataset. The database and code are anonymously available at https://github.com/IntMeGroup/LOVE.