Neuro-Symbolic Evaluation of Text-to-Video Models using Formal Verification

📅 2024-11-22
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
Current text-to-video generation models (e.g., Sora, Gen-3) are predominantly evaluated with metrics that emphasize visual quality and motion smoothness while neglecting temporal fidelity and text-video alignment, both critical for safety-critical applications. To address this gap, we propose NeuS-V, the first quantitative evaluation framework grounded in neuro-symbolic formal verification. The method comprises three components: (1) automatic compilation of natural-language prompts into temporal logic (TL) specifications; (2) symbolic modeling of videos as finite-state automata; and (3) formal verification of the video automata against the TL specifications via model checking. To probe temporal complexity, we construct the first synthetic prompt dataset explicitly designed to span varying levels of temporal complexity. Experiments show that NeuS-V achieves over fivefold higher correlation with human judgment than existing metrics and, for the first time, systematically exposes severe temporal-reasoning failures of state-of-the-art models on temporally complex prompts.
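The verification step of the pipeline above can be sketched in a few lines. This is not the authors' code: NeuS-V models videos as finite-state automata, which this sketch simplifies to a linear trace of frames, each represented as the set of atomic propositions detected in it (proposition names and the example trace are hypothetical, and would in practice come from a vision-language model).

```python
# Minimal sketch (not the NeuS-V implementation): checking a
# temporal-logic specification against a video represented as a
# sequence of frame states. Each state is the set of atomic
# propositions that hold in that frame; names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:          # atomic proposition, e.g. Prop("car_moving")
    name: str

@dataclass(frozen=True)
class Eventually:    # F phi: phi holds at some current-or-future frame
    sub: object

@dataclass(frozen=True)
class Until:         # phi U psi: phi holds until psi becomes true
    left: object
    right: object

def holds(formula, trace, i=0):
    """Evaluate an LTL-style formula on trace[i:] (finite-trace semantics)."""
    if i >= len(trace):
        return False
    if isinstance(formula, Prop):
        return formula.name in trace[i]
    if isinstance(formula, Eventually):
        return any(holds(formula.sub, trace, j) for j in range(i, len(trace)))
    if isinstance(formula, Until):
        for j in range(i, len(trace)):
            if holds(formula.right, trace, j):
                return True
            if not holds(formula.left, trace, j):
                return False
        return False
    raise TypeError(f"unknown formula: {formula!r}")

# A prompt like "a car drives until it reaches a tree" could compile to:
spec = Until(Prop("car_moving"), Prop("car_at_tree"))

# Frame-level propositions extracted from a generated video:
trace = [{"car_moving"}, {"car_moving"}, {"car_at_tree"}]
print(holds(spec, trace))  # True: the video satisfies the specification
```

A real model checker operates on automata rather than single traces and supports the full temporal-logic grammar, but the satisfaction check it performs is the same in spirit.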

📝 Abstract
Recent advancements in text-to-video models such as Sora, Gen-3, MovieGen, and CogVideoX are pushing the boundaries of synthetic video generation, with adoption seen in fields like robotics, autonomous driving, and entertainment. As these models become prevalent, various metrics and benchmarks have emerged to evaluate the quality of the generated videos. However, these metrics emphasize visual quality and smoothness, neglecting temporal fidelity and text-to-video alignment, which are crucial for safety-critical applications. To address this gap, we introduce NeuS-V, a novel synthetic video evaluation metric that rigorously assesses text-to-video alignment using neuro-symbolic formal verification techniques. Our approach first converts the prompt into a formally defined Temporal Logic (TL) specification and translates the generated video into an automaton representation. Then, it evaluates the text-to-video alignment by formally checking the video automaton against the TL specification. Furthermore, we present a dataset of temporally extended prompts to evaluate state-of-the-art video generation models against our benchmark. We find that NeuS-V achieves over 5x higher correlation with human evaluations than existing metrics. Our evaluation further reveals that current video generation models perform poorly on these temporally complex prompts, highlighting the need for future work in improving text-to-video generation capabilities.
Problem

Research questions and friction points this paper is trying to address.

Assessing text-to-video alignment in synthetic video models
Evaluating temporal fidelity using neuro-symbolic formal verification
Addressing gaps in current video generation evaluation metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-symbolic formal verification for video evaluation
Temporal Logic specification for prompt alignment
Automaton representation for video analysis
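The first innovation, compiling a prompt into a TL specification, can be illustrated with a toy rule-based mapping. NeuS-V's actual compiler is more sophisticated; this sketch only shows the idea, and the connective keywords ("until", "then") are illustrative assumptions rather than the paper's grammar.

```python
# Toy sketch of prompt -> temporal-logic compilation (hypothetical;
# not the NeuS-V compiler). Maps a few temporal connectives in a
# natural-language prompt to an LTL-style specification string,
# where F = "eventually" and U = "until".

def compile_prompt(prompt: str) -> str:
    """Map simple temporal connectives in a prompt to an LTL-style string."""
    prompt = prompt.lower().strip().rstrip(".")
    if " until " in prompt:
        left, right = prompt.split(" until ", 1)
        return f"({left}) U ({right})"
    if " then " in prompt:
        first, second = prompt.split(" then ", 1)
        return f"F (({first}) & F ({second}))"
    return f"F ({prompt})"  # default: the described event eventually occurs

print(compile_prompt("a car drives until it reaches a tree"))
# -> (a car drives) U (it reaches a tree)
```

In the full framework, the atomic phrases on each side of a connective would themselves be grounded as propositions checkable per frame, which is what makes the downstream model-checking step possible.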
Authors

S. P. Sharan
The University of Texas at Austin, United States
Minkyu Choi
The University of Texas at Austin, United States
Sahil Shah
The University of Texas at Austin, United States
Harsh Goel
The University of Texas at Austin
Research areas: Reinforcement Learning, Robotics, Generative AI, Neurosymbolic AI
Mohammad Omama
The University of Texas at Austin
Research areas: Robotics, Machine Learning
Sandeep Chinchali
The University of Texas at Austin, United States