🤖 AI Summary
This work addresses the lack of theoretical foundations for modeling spatiotemporal dependencies in sequential data with diffusion Transformers. We propose the first provable theoretical framework, interpreting the Transformer as unrolling an algorithm that computes the score function of the diffusion process. Specifically, we establish the first guarantees on score approximation and distribution estimation for Gaussian process data exhibiting multiple covariance decay patterns, ranging from exponential to polynomial and logarithmic decay. Methodologically, our approach integrates diffusion modeling, Gaussian process theory, and a rigorous analysis of attention mechanisms. Numerical experiments confirm that attention layers effectively capture spatiotemporal dependencies and that learning efficiency is governed by the underlying covariance decay regime. Our results provide both interpretability and verifiability guarantees for large-scale video generation models such as Sora, bridging a critical gap between empirical success and theoretical understanding in diffusion-based sequence modeling.
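To make the covariance decay regimes concrete, here is a minimal sketch (not code from the paper) that samples Gaussian process sequence data under three illustrative kernels whose frame-to-frame correlation decays exponentially, polynomially, or logarithmically in the time gap; the specific kernel forms and constants are assumptions chosen only to mirror those decay rates.

```python
import numpy as np

# Minimal sketch (not the paper's code): sample Gaussian process sequence
# data whose covariance between frames i and j decays with the time gap
# |i - j| under three illustrative regimes. The kernel forms below are
# assumptions chosen to mirror the decay rates discussed above.

def covariance(n_frames, decay):
    gaps = np.abs(np.subtract.outer(np.arange(n_frames), np.arange(n_frames)))
    if decay == "exponential":       # K(i, j) = exp(-|i - j|)
        return np.exp(-gaps.astype(float))
    if decay == "polynomial":        # K(i, j) = (1 + |i - j|)^(-2)
        return (1.0 + gaps) ** -2.0
    if decay == "logarithmic":       # K(i, j) = 1 / log(e + |i - j|)
        return 1.0 / np.log(np.e + gaps)
    raise ValueError(decay)

rng = np.random.default_rng(0)
for decay in ("exponential", "polynomial", "logarithmic"):
    K = covariance(64, decay)
    # One sequence x ~ N(0, K); the jitter keeps the Cholesky factor stable.
    L = np.linalg.cholesky(K + 1e-6 * np.eye(64))
    x = L @ rng.standard_normal(64)
    print(f"{decay:12s} corr(frame 0, frame 8) = {K[0, 8]:.3f}")
```

Printing the correlation at a fixed lag makes the ordering of the regimes visible: exponential decay leaves essentially no long-range dependence, while logarithmic decay keeps distant frames strongly correlated, which is the regime distinction the learning-efficiency results above turn on.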
Abstract
Diffusion Transformer, the backbone of Sora for video generation, successfully scales the capacity of diffusion models, pioneering new avenues for high-fidelity sequential data generation. Unlike static data such as images, sequential data consists of consecutive frames indexed by time and exhibits rich spatial and temporal dependencies. These dependencies reflect the underlying dynamic model and are critical for validating the generated data. In this paper, we take the first theoretical step toward understanding how diffusion transformers capture spatial-temporal dependencies. Specifically, we establish score approximation and distribution estimation guarantees of diffusion transformers for learning Gaussian process data with covariance functions of various decay patterns. We highlight how the spatial-temporal dependencies are captured and how they affect learning efficiency. Our study proposes a novel transformer approximation theory, in which the transformer acts to unroll an algorithm. We support our theoretical results with numerical experiments, providing strong evidence that spatial-temporal dependencies are captured within attention layers, aligning with our approximation theory.
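The "transformer unrolls an algorithm" viewpoint can be illustrated with a toy computation. For Gaussian process data, the score of the noised variable x_t = a·x_0 + s·z is the linear map score(x_t) = -(a²K + s²I)⁻¹ x_t, and a stack of layers can realize it by unrolling gradient descent on the associated quadratic objective, one step per layer. The sketch below is our own illustration under these assumptions (an exponential-decay kernel and arbitrary schedule constants a, s; the paper's actual attention-layer construction may differ) and checks that the unrolled iteration converges to the exact score.

```python
import numpy as np

# Illustration (an assumption-laden sketch, not the paper's construction):
# for Gaussian process data x0 ~ N(0, K), the score of x_t = a*x0 + s*z is
#     score(x_t) = -(a^2 K + s^2 I)^{-1} x_t.
# Each "layer" below performs one gradient step on the quadratic
# 0.5 y^T Sigma y - y^T x, so stacking L layers approximates Sigma^{-1} x,
# i.e. the network unrolls an iterative solver.

def unrolled_score(x, Sigma, n_layers=50):
    eta = 1.0 / np.linalg.norm(Sigma, 2)   # step size 1 / ||Sigma||_2
    y = np.zeros_like(x)
    for _ in range(n_layers):              # one "layer" = one solver step
        y = y - eta * (Sigma @ y - x)
    return -y                              # approximates -Sigma^{-1} x

n = 32
gaps = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
K = np.exp(-gaps.astype(float))            # exponential-decay covariance
a, s = 0.8, 0.6                            # noise schedule at one time t
Sigma = a**2 * K + s**2 * np.eye(n)

rng = np.random.default_rng(0)
x = np.linalg.cholesky(Sigma) @ rng.standard_normal(n)

approx = unrolled_score(x, Sigma)
exact = -np.linalg.solve(Sigma, x)
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

Because s² lower-bounds the eigenvalues of Sigma, the iteration contracts geometrically, so the depth needed for a given accuracy is driven by the conditioning of a²K + s²I, which in turn depends on how fast the covariance decays; this is one intuition for why learning efficiency varies across decay regimes.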