🤖 AI Summary
Diffusion models pretrained on static images suffer from temporal inconsistency in real-world video super-resolution (VSR), while naively applying text-to-video (T2V) models compromises fidelity and amplifies real-world degradation artifacts. Method: This work pioneers the integration of large-scale T2V models—specifically CogVideoX-5B—into VSR. The authors propose a Local Information Enhancement Module (LIEM) to suppress complex degradation artifacts and design a Dynamic Frequency Loss (DF Loss) to jointly optimize multi-scale spatial details and temporal consistency. The approach combines T2V priors, local attention mechanisms, and frequency-domain-aware optimization. Contribution/Results: The method achieves state-of-the-art performance on both synthetic and real-world benchmarks, significantly improving visual quality, structural fidelity, and inter-frame stability without requiring additional video-specific pretraining or fine-tuning.
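The LIEM described above enriches local detail before the model's global attention block. The paper chunk here does not give its exact formulation, so the following is only a minimal sketch of the general idea, assuming LIEM acts as a residual windowed self-attention over spatial features (the function names and the window size are illustrative, not the authors' implementation):

```python
import numpy as np

def local_window_attention(x, window=4):
    """Self-attention restricted to non-overlapping spatial windows.

    x: (H, W, C) feature map; H and W are assumed divisible by `window`.
    Each window attends only to its own pixels, capturing local structure
    that a global attention block may smooth over.
    """
    H, W, C = x.shape
    out = np.zeros_like(x)
    for i in range(0, H, window):
        for j in range(0, W, window):
            patch = x[i:i + window, j:j + window].reshape(-1, C)  # (w*w, C)
            scores = patch @ patch.T / np.sqrt(C)                  # (w*w, w*w)
            # row-wise softmax
            attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
            attn /= attn.sum(axis=-1, keepdims=True)
            out[i:i + window, j:j + window] = (attn @ patch).reshape(window, window, C)
    return out

def liem_block(x, window=4):
    """Hypothetical LIEM: add locally-attended features back as a residual,
    before the (not shown) global attention block processes x."""
    return x + local_window_attention(x, window)
```

The residual form is a common design choice for such enhancement modules: the block can only add local information, so it cannot destroy what the pretrained global attention already models.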
📝 Abstract
Image diffusion models have been adapted for real-world video super-resolution to tackle over-smoothing issues in GAN-based methods. However, these models struggle to maintain temporal consistency, as they are trained on static images, limiting their ability to capture temporal dynamics effectively. Integrating text-to-video (T2V) models into video super-resolution for improved temporal modeling is straightforward. However, two key challenges remain: artifacts introduced by complex degradations in real-world scenarios, and compromised fidelity due to the strong generative capacity of powerful T2V models (e.g., CogVideoX-5B). To enhance the spatio-temporal quality of restored videos, we introduce STAR (Spatial-Temporal Augmentation with T2V models for Real-world video super-resolution), a novel approach that leverages T2V models for real-world video super-resolution, achieving realistic spatial details and robust temporal consistency. Specifically, we introduce a Local Information Enhancement Module (LIEM) before the global attention block to enrich local details and mitigate degradation artifacts. Moreover, we propose a Dynamic Frequency (DF) Loss to reinforce fidelity, guiding the model to focus on different frequency components across diffusion steps. Extensive experiments demonstrate STAR outperforms state-of-the-art methods on both synthetic and real-world datasets.
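The DF Loss is described only at the level of "focus on different frequency components across diffusion steps." A plausible reading is a frequency-domain loss whose low/high-frequency weighting shifts with the diffusion timestep: early (noisy) steps emphasize low frequencies, which carry global structure, and late steps emphasize high frequencies, which carry fine detail. The sketch below assumes this reading; the cutoff, the linear schedule `alpha = t / T`, and the L1 spectral error are all illustrative choices, not the paper's:

```python
import numpy as np

def dynamic_frequency_loss(pred, target, t, T, cutoff=0.25):
    """Hypothetical DF Loss sketch for one (H, W) frame.

    t: current diffusion step, counting down from T (t = T is pure noise).
    Early steps weight the low-frequency error (structure/fidelity);
    late steps weight the high-frequency error (fine texture).
    """
    Fp, Ft = np.fft.fft2(pred), np.fft.fft2(target)
    H, W = pred.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    low = np.sqrt(fy**2 + fx**2) <= cutoff   # low-frequency mask (incl. DC)

    err = np.abs(Fp - Ft)                    # L1 error in the spectrum
    alpha = t / T                            # 1 at the noisiest step, 0 at the last
    return alpha * err[low].mean() + (1 - alpha) * err[~low].mean()
```

With this schedule the loss smoothly trades structural fidelity against detail sharpness over the sampling trajectory, which matches the abstract's stated goal of jointly reinforcing fidelity and fine spatial detail.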