STAR: Spatial-Temporal Augmentation with Text-to-Video Models for Real-World Video Super-Resolution

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models pretrained on static images struggle to maintain temporal consistency in real-world video super-resolution (VSR), while naively adopting text-to-video (T2V) models introduces artifacts under complex real-world degradations and compromises fidelity. Method: This work integrates a large-scale T2V model, specifically CogVideoX-5B, into VSR. It proposes a Local Information Enhancement Module (LIEM), inserted before the global attention block, to enrich local details and suppress degradation artifacts, and a Dynamic Frequency (DF) Loss that guides the model to emphasize different frequency components at different diffusion steps, jointly improving spatial detail and temporal consistency. Contribution/Results: The method achieves state-of-the-art performance on both synthetic and real-world benchmarks, improving visual quality, structural fidelity, and inter-frame stability.
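The summary describes LIEM only at a high level: a local-information gate placed before a global attention block. As a minimal sketch of that idea, the snippet below builds a CBAM-style spatial attention map from channel-wise average and max statistics and uses it to re-weight features locally; the pooling choice and the scalars `w_avg`, `w_max`, `bias` are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def liem(feat, w_avg=1.0, w_max=1.0, bias=0.0):
    """Hypothetical Local Information Enhancement Module sketch.

    feat: (C, H, W) feature map. A spatial attention map is computed
    from channel-wise average and max statistics, squashed through a
    sigmoid, and used to modulate the features per spatial location
    before they would enter a global attention block.
    """
    avg_pool = feat.mean(axis=0)                                # (H, W)
    max_pool = feat.max(axis=0)                                 # (H, W)
    attn = sigmoid(w_avg * avg_pool + w_max * max_pool + bias)  # (H, W)
    return feat * attn[None, :, :]          # broadcast gate over channels

enhanced = liem(np.random.randn(8, 16, 16))
```

In the real module the gate would be learned; here the weights are fixed scalars purely to show where local statistics enter relative to global attention.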

📝 Abstract
Image diffusion models have been adapted for real-world video super-resolution to tackle over-smoothing issues in GAN-based methods. However, these models struggle to maintain temporal consistency, as they are trained on static images, limiting their ability to capture temporal dynamics effectively. Integrating text-to-video (T2V) models into video super-resolution for improved temporal modeling is straightforward. However, two key challenges remain: artifacts introduced by complex degradations in real-world scenarios, and compromised fidelity due to the strong generative capacity of powerful T2V models (e.g., CogVideoX-5B). To enhance the spatio-temporal quality of restored videos, we introduce STAR (Spatial-Temporal Augmentation with T2V models for Real-world video super-resolution), a novel approach that leverages T2V models for real-world video super-resolution, achieving realistic spatial details and robust temporal consistency. Specifically, we introduce a Local Information Enhancement Module (LIEM) before the global attention block to enrich local details and mitigate degradation artifacts. Moreover, we propose a Dynamic Frequency (DF) Loss to reinforce fidelity, guiding the model to focus on different frequency components across diffusion steps. Extensive experiments demonstrate STAR outperforms state-of-the-art methods on both synthetic and real-world datasets.
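The abstract's DF Loss idea, focusing on different frequency components across diffusion steps, can be sketched as a timestep-weighted loss in the Fourier domain. In the toy version below, a radial mask splits the spectrum into low and high bands, and the weighting shifts from low frequencies (structure) at noisy early steps toward high frequencies (detail) near the end; the linear schedule, the radial cutoff, and the function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def dynamic_frequency_loss(pred, target, t, T, cutoff=0.25):
    """Toy Dynamic Frequency loss sketch for a single (H, W) frame.

    t: current diffusion timestep, T: total steps (t = T is noisiest).
    Early steps weight the low-frequency error more; late steps weight
    the high-frequency error more. Schedule and cutoff are illustrative.
    """
    H, W = pred.shape
    Fp = np.fft.fftshift(np.fft.fft2(pred))
    Ft = np.fft.fftshift(np.fft.fft2(target))
    yy, xx = np.mgrid[:H, :W]
    # normalized radial distance from the spectrum center
    r = np.hypot(yy - H / 2, xx - W / 2) / (np.hypot(H, W) / 2)
    low = r <= cutoff
    err = np.abs(Fp - Ft)
    alpha = t / T  # 1.0 at the noisiest step, 0.0 at the final step
    return alpha * err[low].mean() + (1 - alpha) * err[~low].mean()
```

A training loop would average this over frames and add it to the base diffusion objective; the point of the sketch is only the step-dependent reweighting of frequency bands.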
Problem

Research questions and friction points this paper is trying to address.

Video Super-Resolution
Temporal Consistency
GAN Over-smoothing
Innovation

Methods, ideas, or system contributions that make the work stand out.

T2V models
Local Information Enhancement Module
Dynamic Frequency Loss
Rui Xie
Nanjing University, ByteDance
Yinhong Liu
Modern & Medieval Languages and Linguistics, University of Cambridge
Penghao Zhou
ByteDance
Chen Zhao
Nanjing University
Jun Zhou
Southwest University
Kai Zhang
Nanjing University
Zhenyu Zhang
Nanjing University
Jian Yang
Nanjing University
Zhenheng Yang
TikTok
Ying Tai
Nanjing University