VidCapBench: A Comprehensive Benchmark of Video Captioning for Controllable Text-to-Video Generation

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video captioning metrics are poorly aligned with quality assessment for controllable text-to-video (T2V) generation, hindering the improvement of video-text alignment in T2V model training. To address this, the paper introduces VidCapBench, the first benchmark designed explicitly for controllable T2V tasks. Its decoupled, caption-format-agnostic evaluation framework partitions critical attributes (aesthetics, content fidelity, motion coherence, and physical plausibility) into automatically evaluable and human-evaluable subsets, serving both rapid iteration during agile development and rigorous final validation. The construction pipeline integrates expert-model pre-annotation with human refinement, multi-dimensional attribute modeling, hierarchical evaluation, and cross-model robustness verification, improving assessment stability and comprehensiveness. Experiments across multiple state-of-the-art video captioning models show that VidCapBench is more stable and comprehensive than existing metrics, and its scores correlate significantly and positively (p < 0.01) with mainstream T2V quality metrics. The benchmark is open-sourced.
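To make the decoupled scheme concrete, here is a minimal sketch (not the benchmark's actual code) of scoring one caption against the two attribute subsets. The attribute names follow the summary; which subset each attribute falls into, the function signatures, and the subset means are assumptions for illustration.

```python
# Minimal sketch of the decoupled evaluation idea (illustrative only, not the
# benchmark's actual API): critical attributes are split into an automatically
# evaluable subset and a human-evaluable subset and scored separately.

AUTO_ATTRIBUTES = ("aesthetics", "content_fidelity")              # machine-checkable
HUMAN_ATTRIBUTES = ("motion_coherence", "physical_plausibility")  # need annotators

def evaluate_caption(caption, auto_scorers, human_scores):
    """Score one caption: run automatic scorers, merge in human judgments.

    auto_scorers: dict mapping attribute name -> callable(caption) -> float
    human_scores: dict mapping attribute name -> float from annotators
    """
    scores = {a: auto_scorers[a](caption) for a in AUTO_ATTRIBUTES}
    scores.update({a: human_scores[a] for a in HUMAN_ATTRIBUTES})
    # Report the two subsets separately: the automatic subset supports rapid
    # iteration during agile development, the human subset final validation.
    auto_mean = sum(scores[a] for a in AUTO_ATTRIBUTES) / len(AUTO_ATTRIBUTES)
    human_mean = sum(scores[a] for a in HUMAN_ATTRIBUTES) / len(HUMAN_ATTRIBUTES)
    return {"per_attribute": scores, "auto": auto_mean, "human": human_mean}
```

Keeping the two subset scores separate, rather than collapsing them into one number, is what lets automatic evaluation stand in for full evaluation during day-to-day development.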

📝 Abstract
The training of controllable text-to-video (T2V) models relies heavily on the alignment between videos and captions, yet little existing research connects video caption evaluation with T2V generation assessment. This paper introduces VidCapBench, a video caption evaluation scheme specifically designed for T2V generation, agnostic to any particular caption format. VidCapBench employs a data annotation pipeline, combining expert model labeling and human refinement, to associate each collected video with key information spanning video aesthetics, content, motion, and physical laws. VidCapBench then partitions these key information attributes into automatically assessable and manually assessable subsets, catering to both the rapid evaluation needs of agile development and the accuracy requirements of thorough validation. By evaluating numerous state-of-the-art captioning models, we demonstrate the superior stability and comprehensiveness of VidCapBench compared to existing video captioning evaluation approaches. Verification with off-the-shelf T2V models reveals a significant positive correlation between scores on VidCapBench and the T2V quality evaluation metrics, indicating that VidCapBench can provide valuable guidance for training T2V models. The project is available at https://github.com/VidCapBench/VidCapBench.
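As a rough schematic of the two-stage annotation pipeline described in the abstract (expert-model pre-annotation followed by human refinement), one could organize each video's key information per dimension as below; the record fields and the two callables are hypothetical, not the paper's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class KeyInfo:
    """Key information for one collected video, organized along the four
    dimensions named in the abstract (field layout is an assumption)."""
    aesthetics: list = field(default_factory=list)
    content: list = field(default_factory=list)
    motion: list = field(default_factory=list)
    physical_laws: list = field(default_factory=list)

def annotate(video_path, expert_model, human_review) -> KeyInfo:
    """Two-stage labeling: expert model drafts, annotators refine (hypothetical API)."""
    draft = expert_model(video_path)        # stage 1: automatic pre-annotation
    return human_review(video_path, draft)  # stage 2: human correction/refinement
```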
Problem

Research questions and friction points this paper is trying to address.

Existing caption evaluation metrics are disconnected from T2V generation quality assessment.
No benchmark covers the diverse video attributes (aesthetics, content, motion, physical laws) relevant to T2V training.
Whether caption evaluation scores actually track T2V quality metrics has not been verified.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data annotation pipeline combining expert-model labeling with human refinement
Partitioning of key attributes into automatically and manually assessable subsets
Verified positive correlation between VidCapBench scores and T2V quality metrics (a correlation-check sketch follows this list)
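The reported correlation can be checked with a standard rank-correlation test. The sketch below uses SciPy's `spearmanr`; the choice of Spearman here is an assumption (the paper reports significance at p < 0.01 but the exact statistic is not restated on this page), and the numbers in the usage example are placeholders.

```python
# Sketch of the correlation check between caption-benchmark scores and T2V
# quality metrics; Spearman is one reasonable choice, not necessarily the
# paper's exact statistic.
from scipy.stats import spearmanr

def caption_to_t2v_correlation(caption_scores, t2v_quality):
    """Rank correlation between per-model caption scores and the quality of
    videos generated from those models' captions."""
    rho, p_value = spearmanr(caption_scores, t2v_quality)
    return rho, p_value

# Placeholder numbers, for illustration only.
rho, p = caption_to_t2v_correlation([71.2, 68.5, 75.9], [0.62, 0.58, 0.66])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```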
👥 Authors
Xinlong Chen · New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA); School of Artificial Intelligence, University of Chinese Academy of Sciences
Yuanxing Zhang · Kuaishou Technology (Recommender System, Large Language Model, Video Understanding)
Chongling Rao · Kuaishou Technology
Yushuo Guan · Peking University (VLM, Diffusion Model)
Jiaheng Liu · Nanjing University
Fuzheng Zhang · Kuaishou Technology
Chengru Song · affiliation unknown
Qiang Liu · New Laboratory of Pattern Recognition (NLPR), CASIA; School of Artificial Intelligence, University of Chinese Academy of Sciences
Di Zhang · Kuaishou Technology
Tieniu Tan · Institute of Automation, Chinese Academy of Sciences