Early Failure Detection and Intervention in Video Diffusion Models

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of frequent generation failures—such as poor alignment or low visual quality—in text-to-video diffusion models, which stem from sampling non-determinism and are difficult to detect early during inference, leading to substantial computational waste. The authors propose a lightweight, plug-and-play framework for early failure detection and intervention: a real-time inspection module rapidly converts latent representations into intermediate video previews, enabling efficient alignment assessment in RGB space using off-the-shelf vision-language scorers. A hierarchical early-exit mechanism is triggered only upon predicted failure. The study demonstrates, for the first time, that detectable failure signals exist in the early denoising stages and that the approach is orthogonal to existing prompt optimization and sampling guidance techniques. Evaluated on CogVideoX-5B and Wan2.1-1.3B, the method improves VBench consistency and achieves up to 2.64× inference speedup, while scaling effectively to high-load 720p/81-frame generation on Wan2.1-14B with only 39.2 ms per detection.

📝 Abstract
Text-to-video (T2V) diffusion models have rapidly advanced, yet generations still occasionally fail in practice, for example through low text-video alignment or low perceptual quality. Since diffusion sampling is non-deterministic, it is difficult to know during inference whether a generation will succeed or fail, incurring high computational cost from trial-and-error regeneration. To address this, we propose an early failure detection and diagnostic intervention pipeline for latent T2V diffusion models. For detection, we design a Real-time Inspection (RI) module that converts latents into intermediate video previews, enabling the use of established text-video alignment scorers for inspection in RGB space. The RI module completes the conversion and inspection process in just 39.2 ms. This is highly efficient considering that CogVideoX-5B requires 4.3 s per denoising step when generating a 480p, 49-frame video on an NVIDIA A100 GPU. Subsequently, we trigger a hierarchical, early-exit intervention pipeline only when failure is predicted. Experiments on CogVideoX-5B and Wan2.1-1.3B demonstrate consistency gains on VBench with up to 2.64× lower time overhead compared to post-hoc regeneration. Our method also generalizes to a higher-capacity setting, remaining effective on Wan2.1-14B with 720p resolution and 81-frame generation. Furthermore, our pipeline is plug-and-play and orthogonal to existing techniques, showing seamless compatibility with prompt refinement and sampling guidance methods. We also provide evidence that failure signals emerge early in the denoising process and are detectable within intermediate video previews using standard vision-language evaluators.
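The control flow described in the abstract (periodically decode a cheap preview from the latent, score its alignment with the prompt, and exit early on predicted failure) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `decode_preview` stands in for the RI module, `alignment_score` for an off-the-shelf vision-language scorer, and the latent format, check steps, and threshold are all hypothetical placeholders.

```python
# Hedged sketch of early failure detection in a latent T2V sampling loop.
# All function bodies below are toy stand-ins, not the paper's method.

def decode_preview(latent):
    # Stand-in for the Real-time Inspection (RI) module: cheaply map a
    # noisy latent to an approximate RGB-range preview (here, a clamped
    # affine transform instead of a learned lightweight decoder).
    return [max(0.0, min(1.0, 0.5 + 0.5 * v)) for v in latent]

def alignment_score(preview, prompt):
    # Stand-in for an off-the-shelf text-video alignment scorer (e.g. a
    # CLIP-style similarity); here a toy proxy that just averages the
    # preview values so the sketch stays self-contained.
    return sum(preview) / len(preview)

def sample_with_early_exit(latents_per_step, prompt, check_steps, threshold):
    """Walk the denoising trajectory; at designated early steps, inspect
    an intermediate preview and flag failure if alignment is too low,
    so intervention (e.g. regeneration) can start without finishing
    the full trajectory."""
    for step, latent in enumerate(latents_per_step):
        if step in check_steps:
            preview = decode_preview(latent)
            if alignment_score(preview, prompt) < threshold:
                # Predicted failure: exit early and trigger intervention.
                return step, "failure"
    return len(latents_per_step) - 1, "success"

# Illustrative run: a degenerate trajectory is caught at the first check,
# while a healthy one runs to completion.
bad_run = sample_with_early_exit(
    [[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]], "a cat", {0, 1}, 0.3)
good_run = sample_with_early_exit(
    [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]], "a cat", {0, 1}, 0.3)
```

The key design point mirrored from the paper is that inspection happens only at a few early steps and intervention is triggered only on predicted failure, so the common success path pays almost no overhead.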
Problem

Research questions and friction points this paper is trying to address.

text-to-video diffusion models
generation failure
early failure detection
computational cost
non-deterministic sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

early failure detection
real-time inspection
text-to-video diffusion
diagnostic intervention
latent video generation