🤖 AI Summary
This work addresses the computational inefficiency inherent in existing causal diffusion models, where temporal causal reasoning is tightly coupled with the multi-step denoising process, leading to redundant computation. The study shows, for the first time, that causal reasoning and denoising in video diffusion can be effectively decoupled. To this end, the authors propose a novel architecture that employs a causal Transformer encoder to perform temporal modeling once per frame, followed by a lightweight diffusion decoder for per-frame rendering. This design significantly improves throughput and reduces per-frame latency across multiple synthetic and real-world video benchmarks, while matching or even surpassing the generation quality of baseline methods.
📝 Abstract
Causality -- referring to temporal, uni-directional cause-effect relationships between components -- underlies many complex generative processes, including videos, language, and robot trajectories. Current causal diffusion models entangle temporal reasoning with iterative denoising, applying causal attention across all layers, at every denoising step, and over the entire context. In this paper, we show that the causal reasoning in these models is separable from the multi-step denoising process. Through systematic probing of autoregressive video diffusers, we uncover two key regularities: (1) early layers produce highly similar features across denoising steps, indicating redundant computation along the diffusion trajectory; and (2) deeper layers exhibit sparse cross-frame attention and primarily perform intra-frame rendering. Motivated by these findings, we introduce Separable Causal Diffusion (SCD), a new architecture that explicitly decouples once-per-frame temporal reasoning, via a causal transformer encoder, from multi-step frame-wise rendering, via a lightweight diffusion decoder. Extensive experiments on both pretraining and post-training tasks across synthetic and real benchmarks show that SCD significantly improves throughput and per-frame latency while matching or surpassing the generation quality of strong causal diffusion baselines.
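To make the claimed efficiency gain concrete, the following is a minimal counting sketch, not the paper's code: it contrasts the compute pattern of a coupled causal diffusion baseline (causal attention re-run at every denoising step) with the SCD-style decoupled design (temporal reasoning once per frame). The function names and the call-counting harness are illustrative assumptions.

```python
def coupled_generate(num_frames: int, num_denoise_steps: int):
    """Coupled baseline: causal temporal reasoning runs inside every denoising step."""
    encoder_calls = decoder_calls = 0
    for _frame in range(num_frames):
        for _step in range(num_denoise_steps):
            encoder_calls += 1   # causal attention over past frames, repeated per step
            decoder_calls += 1   # per-frame denoising/rendering
    return encoder_calls, decoder_calls


def scd_generate(num_frames: int, num_denoise_steps: int):
    """SCD-style: once-per-frame temporal reasoning, then multi-step rendering."""
    encoder_calls = decoder_calls = 0
    for _frame in range(num_frames):
        encoder_calls += 1           # causal transformer encoder: once per frame
        for _step in range(num_denoise_steps):
            decoder_calls += 1       # lightweight diffusion decoder: per step
    return encoder_calls, decoder_calls


# e.g. 16 frames with 50 denoising steps: the coupled model runs the heavy
# temporal-reasoning pass 800 times, the decoupled one only 16 times.
print(coupled_generate(16, 50))  # → (800, 800)
print(scd_generate(16, 50))      # → (16, 800)
```

Under these assumptions, the expensive causal-attention work shrinks by a factor equal to the number of denoising steps, while the lightweight decoder absorbs the iterative refinement.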