UniVBench: Towards Unified Evaluation for Video Foundation Models

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation benchmarks for video foundation models are fragmented, making it difficult to comprehensively assess their unified capabilities across diverse tasks such as understanding, generation, and editing. To address this limitation, this work proposes UniVBench, the first unified evaluation framework tailored for video foundation models, encompassing four core capabilities: video understanding, generation, editing, and a newly introduced video reconstruction task. Built upon 200 high-quality, multi-shot, human-created videos along with associated instructions and reference images, UniVBench introduces UniV-Eval, an end-to-end standardized agent-based evaluation system that enables consistent cross-task assessment. Human validation demonstrates strong alignment between UniV-Eval scores and human judgments, establishing a fair, scalable, and reproducible benchmark for evaluating intelligent video models.

📝 Abstract
Video foundation models aim to integrate video understanding, generation, editing, and instruction following within a single framework, making them a central direction for next-generation multimodal systems. However, existing evaluation benchmarks remain fragmented and limited in scope, as they each target a single task, rely on task-specific metrics, and typically use short or simple video clips. As a result, they do not capture the unified capabilities that these models are designed to deliver. To address this gap, we introduce UniVBench, a benchmark purpose-built for evaluating video foundation models across four core abilities: video understanding, video generation, video editing, and a newly proposed task, video reconstruction, which assesses how faithfully a model can reproduce video content it has encountered. Our benchmark substantially expands the complexity of evaluation by incorporating 200 high-quality, diverse and multi-shot videos, each paired with detailed captions, multi-format editing instructions, and reference images. All videos are human-created and carefully validated, offering richer cinematic information than prior benchmarks. In addition, we develop a unified agentic evaluation system (UniV-Eval) that standardizes prompting, instruction parsing, and scoring across all tasks, enabling fair, scalable, and reproducible comparisons of unified video models. By grounding evaluation in instruction-based multi-shot video tasks, UniVBench provides the first framework for measuring the integrated capabilities that video foundation models aim to achieve. Extensive human annotations ensure our evaluation aligns with human judgment, enabling rigorous assessment and accelerating progress toward robust video intelligence.
Problem

Research questions and friction points this paper is trying to address.

video foundation models
unified evaluation
evaluation benchmark
multimodal systems
video understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video Foundation Models
Unified Evaluation
Multi-shot Video Benchmark
Video Reconstruction
Agentic Evaluation System