Rethinking Diffusion Model-Based Video Super-Resolution: Leveraging Dense Guidance from Aligned Features

📅 2025-11-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address error accumulation, spatial artifacts, and the trade-off between perceptual quality and fidelity in diffusion-based video super-resolution, this paper identifies inaccurate inter-frame alignment and insufficient motion compensation as primary causes. The authors further observe that the feature domain is better suited than the pixel domain for temporal information compensation, and that warping at an upscaled resolution better preserves high-frequency details, although this benefit is not necessarily monotonic in the upscaling factor. Building on these insights, they propose the Optical Guided Warping Module (OGWM) and the Feature-wise Temporal Condition Module (FTCM), enabling dense, precise inter-frame alignment and robust temporal modeling directly in the feature domain. Integrated into a diffusion framework, the method achieves state-of-the-art performance on both synthetic and real-world datasets: DISTS decreases by 35.82% (better perceptual quality), PSNR increases by 0.20 dB (better fidelity), and tLPIPS decreases by 30.37% (better temporal consistency).
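
The two observations above suggest a concrete mechanism: estimate optical flow between adjacent frames, upsample both the features and the flow, backward-warp in the feature domain, then return to the working resolution. Below is a minimal PyTorch sketch of that idea. The function names, the bilinear `grid_sample` warping, and the fixed upscaling factor are illustrative assumptions, not the paper's exact OGWM design.

```python
# A minimal sketch of feature-domain warping at an upscaled resolution,
# in the spirit of the paper's OGWM. All names here are assumptions.
import torch
import torch.nn.functional as F


def warp_features(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp features `feat` (N, C, H, W) with optical flow
    `flow` (N, 2, H, W), where flow is given in pixel units."""
    n, _, h, w = feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # displaced y coordinates
    # Normalize to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / max(w - 1, 1) - 1.0,
         2.0 * grid_y / max(h - 1, 1) - 1.0),
        dim=-1,
    )
    return F.grid_sample(feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


def warp_at_upscaled_resolution(feat, flow, scale: int = 2):
    """Upscale features and flow, warp, then return to the input size.
    Warping at the higher resolution reduces interpolation blur, which
    is the paper's observation about preserving high-frequency detail."""
    h, w = feat.shape[-2:]
    up_feat = F.interpolate(feat, scale_factor=scale, mode="bilinear",
                            align_corners=False)
    # Flow vectors must be rescaled along with the spatial grid.
    up_flow = F.interpolate(flow, scale_factor=scale, mode="bilinear",
                            align_corners=False) * scale
    warped = warp_features(up_feat, up_flow)
    return F.interpolate(warped, size=(h, w), mode="bilinear",
                         align_corners=False)
```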

📝 Abstract
Diffusion model (DM) based Video Super-Resolution (VSR) approaches achieve impressive perceptual quality. However, they suffer from error accumulation, spatial artifacts, and a trade-off between perceptual quality and fidelity, primarily caused by inaccurate alignment and insufficient compensation between video frames. In this paper, within the DM-based VSR pipeline, we revisit the role of alignment and compensation between adjacent video frames and reveal two crucial observations: (a) the feature domain is better suited than the pixel domain for information compensation due to its stronger spatial and temporal correlations, and (b) warping at an upscaled resolution better preserves high-frequency information, but this benefit is not necessarily monotonic. Therefore, we propose a novel Densely Guided diffusion model with Aligned Features for Video Super-Resolution (DGAF-VSR), with an Optical Guided Warping Module (OGWM) to maintain high-frequency details in the aligned features and a Feature-wise Temporal Condition Module (FTCM) to deliver dense guidance in the feature domain. Extensive experiments on synthetic and real-world datasets demonstrate that DGAF-VSR surpasses state-of-the-art methods in key aspects of VSR, including perceptual quality (35.82% DISTS reduction), fidelity (0.20 dB PSNR gain), and temporal consistency (30.37% tLPIPS reduction).
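
The abstract describes the FTCM only at a high level, so the following sketch uses a spatially dense scale-and-shift modulation (FiLM/SPADE-style conditioning) as a plausible stand-in for how aligned neighbor features could deliver dense, per-location guidance to a diffusion denoiser's features. The class name and architecture are assumptions, not the paper's exact design.

```python
# A minimal sketch of feature-wise temporal conditioning, in the spirit
# of the paper's FTCM; the modulation scheme here is an assumption.
import torch
import torch.nn as nn


class FeatureTemporalCondition(nn.Module):
    """Condition denoiser features on aligned neighbor features via a
    per-pixel scale and shift, i.e. dense rather than global guidance."""

    def __init__(self, channels: int):
        super().__init__()
        self.to_scale_shift = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, 2 * channels, 3, padding=1),
        )

    def forward(self, denoiser_feat: torch.Tensor,
                aligned_feat: torch.Tensor) -> torch.Tensor:
        # Predict a scale and shift at every spatial location.
        scale, shift = self.to_scale_shift(aligned_feat).chunk(2, dim=1)
        return denoiser_feat * (1.0 + scale) + shift
```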
Problem

Research questions and friction points this paper is trying to address.

Addressing error accumulation in diffusion-based video super-resolution
Improving alignment and compensation between adjacent video frames
Enhancing perceptual quality and fidelity while reducing artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optical Guided Warping Module (OGWM) preserves high-frequency details in the aligned features
Feature-wise Temporal Condition Module (FTCM) delivers dense guidance in the feature domain
Aligned features within the diffusion framework improve video super-resolution (a combined usage sketch follows this list)
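
Tying the two sketches together, a hypothetical single-step usage might look as follows. It reuses `warp_at_upscaled_resolution` and `FeatureTemporalCondition` from the blocks above; all tensor shapes and the flow source are illustrative assumptions.

```python
import torch

# Reuses the two sketches above; shapes and flow source are illustrative.
feat_prev = torch.randn(1, 64, 32, 32)  # features of the previous frame
feat_cur = torch.randn(1, 64, 32, 32)   # current-frame denoiser features
flow = torch.randn(1, 2, 32, 32)        # e.g. from an off-the-shelf flow net

aligned = warp_at_upscaled_resolution(feat_prev, flow, scale=2)  # OGWM-style
ftcm = FeatureTemporalCondition(channels=64)
guided = ftcm(feat_cur, aligned)        # densely guided features, same shape
print(guided.shape)                     # torch.Size([1, 64, 32, 32])
```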
Jingyi Xu
Department of Electronic and Information Engineering, Beihang University
Meisong Zheng
Department of Tao Technology, Alibaba Group
Ying Chen
Department of Tao Technology, Alibaba Group
Minglang Qiao
Beihang University
video perception, low-level vision, quality enhancement, video coding
Xin Deng
Department of Electronic and Information Engineering, Beihang University
Mai Xu
Beihang University, Tsinghua University, Imperial College London