Q-Save: Towards Scoring and Attribution for Generated Video Evaluation

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methods for AI-generated videos (AIGVs) lack holistic, interpretable, and quantitative quality assessment. Method: We construct a benchmark dataset of nearly 10,000 samples and propose the first unified framework jointly modeling scalar quality scores and natural-language attribution explanations. Our approach integrates SlowFast-based multi-scale video encoding with three-dimensional fine-grained annotations—visual fidelity, motion realism, and text–video alignment—and employs a multi-stage training strategy: chain-of-thought–guided supervised fine-tuning (SFT), grouped relative policy optimization (GRPO) reinforcement learning, and iterative SFT refinement. Contribution/Results: The resulting model achieves state-of-the-art performance in quality prediction while generating human-interpretable, natural-language justifications. It significantly improves stability and alignment with human preferences, offering a transparent, trustworthy paradigm for multimodal generative evaluation.

📝 Abstract
We present Q-Save, a new benchmark dataset and model for holistic and explainable evaluation of AI-generated video (AIGV) quality. The dataset contains nearly 10,000 videos, each annotated with a scalar mean opinion score (MOS) and fine-grained attribution labels along three core dimensions: visual quality, dynamic quality, and text-video alignment. These multi-aspect annotations enable both accurate quality assessment and interpretable reasoning behind the scores. To leverage this data, we propose a unified evaluation model that jointly performs quality scoring and attribution-based explanation. The model adopts the SlowFast framework to distinguish between slow and fast frames: slow frames are processed at high resolution while fast frames use low resolution, balancing evaluation accuracy and computational efficiency. For training, we format the data in Chain-of-Thought (CoT) style and employ a multi-stage strategy: we first conduct Supervised Fine-Tuning (SFT), then further enhance the model with Grouped Relative Policy Optimization (GRPO), and finally perform SFT again to improve model stability. Experimental results demonstrate that our model achieves state-of-the-art performance in video quality prediction while also providing human-aligned, interpretable justifications. Our dataset and model establish a strong foundation for explainable evaluation in generative video research, contributing to the development of multimodal generation and trustworthy AI. Code and dataset will be released upon publication.
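The slow/fast split described above can be illustrated with a minimal sketch: a sparse "slow" pathway keeps frames at high resolution, while the remaining dense "fast" frames are downsampled. All names, strides, and resolutions here are hypothetical stand-ins, not the paper's actual implementation.

```python
# Illustrative sketch of SlowFast-style dual-rate frame sampling
# (hypothetical parameters; not the authors' code).

def split_slow_fast(frames, slow_stride=8, slow_res=448, fast_res=224):
    """Partition frames into a sparse high-res slow set and a dense
    low-res fast set; resolutions are (height, width) stand-ins."""
    slow = [(slow_res, slow_res) for i in range(len(frames)) if i % slow_stride == 0]
    fast = [(fast_res, fast_res) for i in range(len(frames)) if i % slow_stride != 0]
    return slow, fast

frames = list(range(32))      # 32 dummy frames
slow, fast = split_slow_fast(frames)
print(len(slow), len(fast))   # -> 4 28
```

The design trade-off is that fine spatial detail is only needed on a few keyframes, while motion cues can be captured from many cheap low-resolution frames.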
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI-generated video quality holistically and explainably
Providing interpretable scoring with visual, dynamic, and alignment attributions
Balancing evaluation accuracy with computational efficiency in video assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

SlowFast framework balances accuracy and efficiency
Multi-stage training with SFT and GRPO optimization
Joint quality scoring and attribution-based explanation model
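The GRPO stage named above relies on group-relative rewards: each sampled response's reward is normalized against the mean and standard deviation of its sampling group, replacing a learned value baseline. A minimal sketch of that normalization, with purely illustrative reward values:

```python
# Sketch of the group-relative advantage computation used in GRPO-style
# RL (illustrative numbers; not the authors' code).
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each reward against its group's mean and std."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

rewards = [0.2, 0.5, 0.8, 0.5]   # rewards for one group of sampled outputs
adv = group_relative_advantages(rewards)
print([round(a, 2) for a in adv])  # -> [-1.41, 0.0, 1.41, 0.0]
```

Because advantages are computed within each group, the resulting policy gradient favors responses that beat their siblings rather than an absolute reward threshold.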
Xiele Wu
Shanghai Jiao Tong University
Zicheng Zhang
Shanghai Jiao Tong University
Mingtao Chen
Hunyuan, Tencent
Yixian Liu
Hunyuan, Tencent
Yiming Liu
Hunyuan, Tencent
Shushi Wang
Shanghai Jiao Tong University
Zhichao Hu
Hunyuan, Tencent
Yuhong Liu
Santa Clara University
Trustworthy AI, Security and Privacy, IoT, Blockchain, Social Networks
Guangtao Zhai
Professor, IEEE Fellow, Shanghai Jiao Tong University
Multimedia Signal Processing, Visual Quality Assessment, QoE, AI Evaluation, Displays
Xiaohong Liu
Shanghai Jiao Tong University, Shanghai Innovation Institute