UniVid: Pyramid Diffusion Model for High Quality Video Generation

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a unified dual-modality video generation framework that effectively bridges the gap between text-to-video (T2V) and image-to-video (I2V) paradigms, which existing methods struggle to reconcile. Built upon a pre-trained text-to-image diffusion architecture, the model introduces a temporal pyramid cross-frame spatiotemporal attention mechanism and a dual-stream cross-attention module, augmented with a re-weightable attention strategy to flexibly fuse textual semantics and structural image information. The framework seamlessly supports T2V, I2V, and joint (T+I)2V generation tasks, enabling smooth interpolation between single- and dual-modality control. Experimental results demonstrate significant improvements over current approaches in both temporal consistency and overall generation quality.

📝 Abstract
Diffusion-based text-to-video (T2V) and image-to-video (I2V) generation have emerged as prominent research foci. However, integrating the two generative paradigms into a unified model remains challenging. In this paper, we present a unified video generation model (UniVid) with hybrid conditions of a text prompt and a reference image. Given these two controls, our model extracts objects' appearance and motion descriptions from textual prompts, while obtaining texture details and structural information from image cues to guide the video generation process. Specifically, we scale up the pre-trained text-to-image diffusion model to generate temporally coherent frames by introducing our temporal-pyramid cross-frame spatial-temporal attention modules and convolutions. To support bimodal control, we introduce a dual-stream cross-attention mechanism whose attention scores can be freely re-weighted at inference time, enabling smooth interpolation between single- and dual-modality control. Extensive experiments show that UniVid achieves superior temporal coherence on T2V, I2V, and (T+I)2V tasks.
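The abstract describes a dual-stream cross-attention whose two streams (text and image conditions) can be re-weighted at inference to interpolate between T2V, I2V, and (T+I)2V. The paper excerpt does not give the exact fusion formula, so the sketch below assumes the simplest form: run cross-attention against each condition stream separately and blend the outputs with user-set weights. All function names and the weighted-sum fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context):
    # standard scaled dot-product cross-attention;
    # keys and values both come from the conditioning context
    scores = queries @ context.T / np.sqrt(queries.shape[-1])
    return softmax(scores) @ context

def dual_stream_cross_attention(frame_tokens, text_tokens, image_tokens,
                                w_text=0.5, w_image=0.5):
    """Hypothetical dual-stream fusion: attend to the text and image
    condition streams independently, then mix with re-weightable
    coefficients. (w_text, w_image) = (1, 0) recovers pure T2V
    conditioning; (0, 1) recovers pure I2V."""
    text_out = cross_attention(frame_tokens, text_tokens)
    image_out = cross_attention(frame_tokens, image_tokens)
    return w_text * text_out + w_image * image_out
```

Varying `w_text`/`w_image` continuously between these endpoints is what would give the smooth single-to-dual-modality interpolation the abstract describes.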
Problem

Research questions and friction points this paper is trying to address.

text-to-video generation
image-to-video generation
unified video generation
multimodal control
temporal coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

pyramid diffusion
temporal coherence
dual-stream cross-attention
unified video generation
hybrid conditioning
Authors: Xinyu Xiao, Binbin Yang, Tingtian Li, Yipeng Yu, Sen Lei
Affiliation: Southwest Jiaotong University
Topics: computer vision · deep learning · remote sensing