LOVE: Benchmarking and Evaluating Text-to-Video Generation and Video-to-Text Interpretation

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: AIGV evaluation heavily relies on inefficient human annotation and lacks a unified, automated benchmarking framework. Method: This paper introduces the first bidirectional, fine-grained evaluation framework supporting both text-to-video (T2V) generation and video-to-text (V2T) understanding. The authors construct AIGVE-60K—the largest high-quality benchmark to date—comprising 3,050 fine-grained prompts, 120K human mean-opinion scores (MOSs), and 60K question-answer pairs. They further propose LOVE, an LMM-based evaluation metric that jointly models perceptual quality, text-video alignment, and task-specific accuracy. Contribution/Results: LOVE achieves state-of-the-art performance on AIGVE-60K and demonstrates strong cross-benchmark generalization. All components—dataset, code, and models—are open-sourced to advance standardized, reproducible AIGV evaluation research.

📝 Abstract
Recent advancements in large multimodal models (LMMs) have driven substantial progress in both text-to-video (T2V) generation and video-to-text (V2T) interpretation tasks. However, current AI-generated videos (AIGVs) still exhibit limitations in terms of perceptual quality and text-video alignment. A reliable and scalable automatic model for AIGV evaluation is therefore desirable, and building one heavily relies on the scale and quality of human annotations. To this end, we present AIGVE-60K, a comprehensive dataset and benchmark for AI-Generated Video Evaluation, which features (i) comprehensive tasks, encompassing 3,050 extensive prompts across 20 fine-grained task dimensions, (ii) the largest human annotations to date, including 120K mean-opinion scores (MOSs) and 60K question-answering (QA) pairs annotated on 58,500 videos generated from 30 T2V models, and (iii) bidirectional benchmarking and evaluation of both T2V generation and V2T interpretation capabilities. Based on AIGVE-60K, we propose LOVE, an LMM-based metric for AIGV Evaluation along multiple dimensions, including perceptual preference, text-video correspondence, and task-specific accuracy, at both the instance level and the model level. Comprehensive experiments demonstrate that LOVE not only achieves state-of-the-art performance on the AIGVE-60K dataset, but also generalizes effectively to a wide range of other AIGV evaluation benchmarks. These findings highlight the significance of the AIGVE-60K dataset. The database and code are anonymously available at https://github.com/IntMeGroup/LOVE.
Problem

Research questions and friction points this paper is trying to address.

Evaluating perceptual quality and text-video alignment in AI-generated videos
Developing a reliable automatic model for AI-generated video assessment
Creating a comprehensive dataset for benchmarking text-to-video and video-to-text tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

AIGVE-60K dataset with 120K MOS and 60K QA human annotations
LOVE metric evaluates multiple video dimensions
Bidirectional benchmarking for T2V and V2T
Jiarui Wang
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China
Huiyu Duan
Shanghai Jiao Tong University
Multimedia Signal Processing
Ziheng Jia
Shanghai Jiao Tong University / Shanghai AI Laboratory
LLMs and LMMs for Visual Quality Assessment
Yu Zhao
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China
Woo Yi Yang
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China
Zicheng Zhang
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China
Zijian Chen
Shanghai Jiao Tong University | Shanghai AI Laboratory
Image/Video Quality Assessment, Large Multi-modal Models
Juntong Wang
Shanghai Jiao Tong University
VQA, LMMs, RL
Yuke Xing
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China
Guangtao Zhai
Professor, IEEE Fellow, Shanghai Jiao Tong University
Multimedia Signal Processing, Visual Quality Assessment, QoE, AI Evaluation, Displays
Xiongkuo Min
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China