GriDiT: Factorized Grid-Based Diffusion for Efficient Long Image Sequence Generation

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods model image sequences as high-dimensional spatiotemporal tensors, which incurs prohibitive computational cost, limits achievable sequence length, and entangles the modeling of fine details with motion dynamics. This work proposes a two-stage generative paradigm: first generating a coarse 3D sequence on a low-resolution grid, then refining it frame by frame via super-resolution. Crucially, we extend the 2D Diffusion Transformer (DiT) to 3D sequence generation without architectural modification, thereby decoupling spatiotemporal dynamics modeling from detail synthesis. We further introduce a self-attention-driven inter-frame correlation learning mechanism, enabling length-agnostic and cross-domain generalizable generation. Experiments demonstrate that our approach surpasses state-of-the-art methods in both visual quality and temporal consistency, achieves over 2× faster inference, and significantly reduces training data and compute requirements.

📝 Abstract
Modern deep learning methods typically treat image sequences as large tensors of sequentially stacked frames. However, is this straightforward representation ideal given the current state-of-the-art (SoTA)? In this work, we address this question in the context of generative models and aim to devise a more effective way of modeling image sequence data. Observing the inefficiencies and bottlenecks of current SoTA image sequence generation methods, we show that rather than working with large tensors, we can improve the generation process by factorizing it: first generating the coarse sequence at low resolution, then refining the individual frames at high resolution. We train a generative model solely on grid images comprising subsampled frames. Yet, we learn to generate image sequences, using the strong self-attention mechanism of the Diffusion Transformer (DiT) to capture correlations between frames. In effect, our formulation extends a 2D image generator to operate as a low-resolution 3D image-sequence generator without introducing any architectural modifications. Subsequently, we super-resolve each frame individually to add the sequence-independent high-resolution details. This approach offers several advantages and overcomes key limitations of the SoTA in this domain. Compared to existing image sequence generation models, our method achieves superior synthesis quality and improved coherence across sequences. It also delivers high-fidelity generation of arbitrary-length sequences and increased efficiency in inference time and training data usage. Furthermore, our straightforward formulation enables our method to generalize effectively across diverse data domains, which typically require additional priors and supervision to model in a generative context. Our method consistently outperforms SoTA in quality and inference speed (at least twice as fast) across datasets.
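The core representational trick in the abstract is treating a sequence of subsampled frames as a single 2D grid image, so a 2D generator can model the whole sequence at once. A minimal sketch of that packing and its inverse (the exact tiling layout is an assumption, not taken from the paper):

```python
import numpy as np

def frames_to_grid(frames, rows, cols):
    """Tile N = rows * cols low-resolution frames into one 2D grid image.

    frames: array of shape (N, H, W, C). Row-major tiling is assumed here
    purely for illustration of the grid-image representation.
    """
    n, h, w, c = frames.shape
    assert n == rows * cols
    grid = frames.reshape(rows, cols, h, w, c)
    # (rows, cols, H, W, C) -> (rows, H, cols, W, C) -> (rows*H, cols*W, C)
    return grid.transpose(0, 2, 1, 3, 4).reshape(rows * h, cols * w, c)

def grid_to_frames(grid, rows, cols):
    """Inverse of frames_to_grid: recover the frame sequence from the grid."""
    gh, gw, c = grid.shape
    h, w = gh // rows, gw // cols
    frames = grid.reshape(rows, h, cols, w, c)
    return frames.transpose(0, 2, 1, 3, 4).reshape(rows * cols, h, w, c)
```

Because the grid is just an ordinary image, a 2D DiT trained on such grids implicitly sees every frame in one self-attention context, which is what lets it learn inter-frame correlations without architectural changes.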
Problem

Research questions and friction points this paper is trying to address.

Efficient long image sequence generation
Improving coherence across generated sequences
Generalizing across diverse data domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Factorizes generation into low-res coarse sequence and high-res frame refinement
Uses Diffusion Transformer self-attention to capture frame correlations
Extends 2D image generator to low-res 3D sequence without architectural changes
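The two-stage factorization above can be sketched end to end. The model calls below are hypothetical stand-ins (a random-noise stub for the 2D DiT and nearest-neighbor upsampling for the super-resolver), not the paper's API; only the control flow mirrors the described pipeline:

```python
import numpy as np

def dit_generate_grid(grid_hw, channels=3, seed=0):
    """Stub for the 2D DiT: pretend-generate a low-res grid image."""
    rng = np.random.default_rng(seed)
    return rng.random((grid_hw[0], grid_hw[1], channels))

def super_resolve(frame, scale=4):
    """Stub for per-frame super-resolution (nearest-neighbor upsampling)."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def generate_sequence(rows=4, cols=4, frame_hw=(8, 8)):
    # Stage 1: one 2D generation pass yields the whole coarse sequence
    # as a single grid image.
    h, w = frame_hw
    grid = dit_generate_grid((rows * h, cols * w))
    # Unpack the grid back into individual low-resolution frames.
    frames = grid.reshape(rows, h, cols, w, -1).transpose(0, 2, 1, 3, 4)
    frames = frames.reshape(rows * cols, h, w, -1)
    # Stage 2: refine each frame independently at high resolution;
    # detail synthesis is decoupled from sequence dynamics.
    return np.stack([super_resolve(f) for f in frames])
```

Note how sequence length is set only by the grid dimensions at stage 1, while stage 2 is embarrassingly parallel across frames, which is where the claimed inference-speed advantage would come from.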