🤖 AI Summary
Quantitative evaluation of subject consistency, appearance naturalness, and identity fidelity remains challenging in subject-to-video (S2V) generation. Method: This paper introduces OpenS2V-Nexus, a systematic infrastructure for S2V generation, comprising (i) OpenS2V-Eval, a fine-grained benchmark of 180 prompts spanning seven major S2V categories and mixing real and synthetic test data; (ii) three automatic metrics aligned with human preferences: NexusScore (subject consistency), NaturalScore (subject naturalness), and GmeScore (text relevance); and (iii) OpenS2V-5M, the first open-source million-scale S2V dataset (five million high-quality 720P subject-text-video triples), whose subject-information diversity comes from segmenting subjects and pairing them via cross-video associations, and from prompting GPT-Image-1 on raw frames to synthesize multi-view representations. Contribution/Results: The authors comprehensively evaluate 16 representative S2V models, highlighting their strengths and weaknesses across different content.
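The paper does not specify how (or whether) the three per-axis scores are combined into one leaderboard number, so the sketch below illustrates only one plausible aggregation. The metric names come from the paper; the [0, 1] score ranges, the weights, and the weighted-mean combination are assumptions for illustration.

```python
# Hypothetical sketch: combining OpenS2V-Eval's three per-axis metrics into a
# single score. Weights and aggregation are illustrative, not the paper's.

from dataclasses import dataclass

@dataclass
class S2VScores:
    nexus_score: float    # subject consistency, assumed in [0, 1]
    natural_score: float  # subject naturalness, assumed in [0, 1]
    gme_score: float      # text relevance, assumed in [0, 1]

def overall_score(s: S2VScores,
                  weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Weighted mean of the three axes (weights are assumptions, not from the paper)."""
    w_nexus, w_natural, w_gme = weights
    total = w_nexus + w_natural + w_gme
    return (w_nexus * s.nexus_score
            + w_natural * s.natural_score
            + w_gme * s.gme_score) / total

# Example: a model strong on text relevance but weaker on subject consistency.
print(overall_score(S2VScores(nexus_score=0.55, natural_score=0.70, gme_score=0.82)))
```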
📝 Abstract
Subject-to-Video (S2V) generation aims to create videos that faithfully incorporate reference content, providing enhanced flexibility in video production. To establish the infrastructure for S2V generation, we propose OpenS2V-Nexus, consisting of (i) OpenS2V-Eval, a fine-grained benchmark, and (ii) OpenS2V-5M, a million-scale dataset. In contrast to existing S2V benchmarks inherited from VBench, which focus on global and coarse-grained assessment of generated videos, OpenS2V-Eval focuses on the model's ability to generate subject-consistent videos with natural subject appearance and identity fidelity. For these purposes, OpenS2V-Eval introduces 180 prompts from seven major categories of S2V, incorporating both real and synthetic test data. Furthermore, to accurately align S2V benchmarks with human preferences, we propose three automatic metrics, NexusScore, NaturalScore, and GmeScore, to separately quantify subject consistency, naturalness, and text relevance in generated videos. Building on this, we conduct a comprehensive evaluation of 16 representative S2V models, highlighting their strengths and weaknesses across different content. Moreover, we create OpenS2V-5M, the first open-source large-scale S2V generation dataset, consisting of five million high-quality 720P subject-text-video triples. Specifically, we ensure subject-information diversity in our dataset by (1) segmenting subjects and building pairing information via cross-video associations and (2) prompting GPT-Image-1 on raw frames to synthesize multi-view representations. Through OpenS2V-Nexus, we deliver a robust infrastructure to accelerate future S2V generation research.
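To make the cross-video subject association step concrete, here is a minimal sketch of the underlying idea: pair subject crops from different videos whenever their embeddings are sufficiently similar. The embedding source, the similarity threshold, and the greedy matcher below are assumptions for illustration; the paper's actual segmentation and pairing pipeline is not reproduced here.

```python
# Minimal sketch of cross-video subject association (assumed mechanics, not
# the paper's pipeline): match subject embeddings across two videos by
# cosine similarity, greedily, above a fixed threshold.

import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def associate_subjects(embs_a, embs_b, threshold=0.85):
    """Greedily pair subject embeddings across two videos above a similarity threshold."""
    pairs, used_b = [], set()
    for i, ea in enumerate(embs_a):
        best_j, best_s = None, threshold
        for j, eb in enumerate(embs_b):
            if j in used_b:
                continue
            s = cosine_sim(ea, eb)
            if s > best_s:
                best_j, best_s = j, s
        if best_j is not None:
            used_b.add(best_j)
            pairs.append((i, best_j, best_s))
    return pairs

# Toy example with random 512-d "subject embeddings": crop 0 of video B is a
# noisy copy of subject 0 in video A, so only that pair should match.
rng = np.random.default_rng(0)
a = [rng.standard_normal(512) for _ in range(3)]
b = [a[0] + 0.05 * rng.standard_normal(512), rng.standard_normal(512)]
print(associate_subjects(a, b))
```

A real pipeline would replace the random vectors with features from a subject encoder and likely use a globally optimal matcher rather than this greedy loop; the sketch only conveys the pairing-by-similarity idea.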