🤖 AI Summary
Current text-to-video (T2V) models frequently produce subtle, localized errors, yet mainstream evaluation paradigms lack fine-grained spatiotemporal localization and semantic attribution capabilities for such errors.
Method: We introduce Spotlight—a novel fine-grained error localization and attribution framework for T2V generation—accompanied by a benchmark dataset comprising 600 videos and over 1,600 human-annotated errors. We systematically define six categories of local errors (e.g., motion distortion, physical implausibility, prompt deviation) and analyze their temporal distribution patterns. Leveraging outputs from state-of-the-art models (e.g., Veo 3, Seedance, LTX-2), we enhance visual-language model (VLM)-based error detection via inference-time optimization strategies.
Contribution/Results: Our approach nearly doubles VLM error-identification accuracy over baselines. Spotlight establishes a new evaluation paradigm for T2V, enabling fine-grained reward modeling and model diagnostics.
📝 Abstract
Current text-to-video (T2V) models can generate high-quality, temporally coherent, and visually realistic videos. Nonetheless, errors still occur frequently, and they are more nuanced and localized than those of the previous generation of T2V models. While current evaluation paradigms assess video models across diverse dimensions, they typically evaluate videos holistically, without identifying when specific errors occur or describing their nature. We address this gap by introducing Spotlight, a novel task aimed at localizing and explaining video-generation errors. We generate 600 videos using 200 diverse textual prompts and three state-of-the-art video generators (Veo 3, Seedance, and LTX-2), and annotate over 1,600 fine-grained errors across six types, including motion, physics, and prompt adherence. We observe that adherence and physics errors are predominant and persist across longer segments, whereas appearance-disappearance and body-pose errors manifest in shorter segments. We then evaluate current VLMs on Spotlight and find that they lag significantly behind humans in identifying and localizing errors in videos. We propose inference-time strategies to probe the limits of current VLMs on our task, improving performance by nearly 2x. Our task paves the way toward fine-grained evaluation tools and more sophisticated reward models for video generators.