🤖 AI Summary
Current text-to-video generation models (e.g., Sora, Gen-3) are predominantly evaluated with metrics that emphasize visual quality and motion smoothness while neglecting temporal fidelity and text-to-video alignment—critical requirements for safety-critical applications. To address this gap, we propose NeuS-V, a quantitative evaluation framework grounded in neuro-symbolic formal verification. Our method comprises three key components: (1) automatic compilation of natural language prompts into temporal logic (TL) specifications; (2) symbolic modeling of videos as finite-state automata; and (3) rigorous formal verification via model checking. To probe temporal complexity, we construct a dataset of temporally extended prompts. Experiments demonstrate that NeuS-V achieves over fivefold higher correlation with human judgment than existing metrics and reveals that state-of-the-art models struggle substantially with temporally complex prompts.
📝 Abstract
Recent advancements in text-to-video models such as Sora, Gen-3, MovieGen, and CogVideoX are pushing the boundaries of synthetic video generation, with adoption seen in fields like robotics, autonomous driving, and entertainment. As these models become prevalent, various metrics and benchmarks have emerged to evaluate the quality of the generated videos. However, these metrics emphasize visual quality and smoothness while neglecting temporal fidelity and text-to-video alignment, which are crucial for safety-critical applications. To address this gap, we introduce NeuS-V, a novel synthetic video evaluation metric that rigorously assesses text-to-video alignment using neuro-symbolic formal verification techniques. Our approach first converts the prompt into a formally defined Temporal Logic (TL) specification and translates the generated video into an automaton representation. Then, it evaluates text-to-video alignment by formally checking the video automaton against the TL specification. Furthermore, we present a dataset of temporally extended prompts to evaluate state-of-the-art video generation models against our benchmark. We find that NeuS-V correlates with human evaluations over 5x more strongly than existing metrics. Our evaluation further reveals that current video generation models perform poorly on these temporally complex prompts, highlighting the need for future work on improving text-to-video generation capabilities.
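To make the verification idea concrete, here is a minimal sketch (not the authors' implementation, which uses full TL specifications and automaton-based model checking): a generated video is abstracted as a finite trace of per-frame proposition sets, and simple temporal-logic operators are evaluated over that trace. All proposition names and the example trace are hypothetical.

```python
# Minimal illustration of checking a "video trace" against temporal-logic
# operators. Each frame is abstracted as the set of atomic propositions
# that hold in it; proposition names here are invented for illustration.

def eventually(trace, prop):
    """F prop: prop holds in at least one frame."""
    return any(prop in frame for frame in trace)

def always(trace, prop):
    """G prop: prop holds in every frame."""
    return all(prop in frame for frame in trace)

def until(trace, p, q):
    """p U q: p holds in every frame until (and not including) one where q holds."""
    for frame in trace:
        if q in frame:
            return True
        if p not in frame:
            return False
    return False

# Hypothetical trace for a prompt like "a car drives until it reaches a red light":
trace = [
    {"car_driving"},
    {"car_driving"},
    {"car_driving", "red_light"},
]

print(until(trace, "car_driving", "red_light"))  # True: alignment satisfied
print(always(trace, "red_light"))                # False
```

In NeuS-V the analogous check is performed by a model checker against a compiled TL specification, rather than by direct trace evaluation as above; this sketch only conveys how temporal alignment differs from frame-wise visual-quality scoring.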