DiTVR: Zero-Shot Diffusion Transformer for Video Restoration

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video restoration faces challenges including strong reliance on paired training data, distorted detail generation, and temporal inconsistency. To address these, we propose an unpaired diffusion Transformer framework. Our key contributions are: (1) a trajectory-aware attention mechanism that aligns tokens along optical flow paths to enforce motion consistency; (2) a wavelet-guided sampling strategy with optical flow regularization, integrating low-frequency priors and motion cues to improve zero-shot reconstruction quality; and (3) dynamic spatiotemporal neighborhood caching coupled with sensitivity-aware layer prioritization, enhancing temporal dynamics modeling. The method significantly improves detail fidelity and inter-frame consistency across video super-resolution, denoising, and deblurring tasks. It demonstrates robustness to optical flow noise and occlusions, achieving state-of-the-art zero-shot performance on multiple benchmarks.
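The trajectory-aware attention described above can be sketched in miniature: a query token attends only to a small window of tokens around the position its optical-flow trajectory points to in the next frame, rather than to the full frame. The sketch below is an illustrative simplification under stated assumptions, not the paper's implementation; the names `trajectory_neighbors` and `trajectory_attention`, the single-query formulation, and the use of keys as values are all assumptions.

```python
import numpy as np

def trajectory_neighbors(flow, pos, radius=1):
    """Follow optical flow from a token position and return token
    coordinates in a (2*radius+1)^2 window around the motion-corresponding
    location in the next frame. `flow` is an (H, W, 2) array of (dy, dx)
    displacements. Illustrative helper, not the paper's API."""
    h, w, _ = flow.shape
    y, x = pos
    dy, dx = flow[y, x]
    cy, cx = int(round(y + dy)), int(round(x + dx))
    return [(ny, nx)
            for ny in range(cy - radius, cy + radius + 1)
            for nx in range(cx - radius, cx + radius + 1)
            if 0 <= ny < h and 0 <= nx < w]

def trajectory_attention(q, kv_frame, flow, pos, radius=1):
    """Attend from the query token at `pos` only to tokens along its
    flow trajectory in the next frame. Keys double as values here,
    a simplification of a full attention head."""
    d = q.shape[-1]
    coords = trajectory_neighbors(flow, pos, radius)
    keys = np.stack([kv_frame[y, x] for y, x in coords])  # (N, d)
    scores = keys @ q / np.sqrt(d)                        # (N,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                              # softmax over trajectory window
    return weights @ keys                                 # (d,) aggregated token
```

With zero flow and `radius=0`, the query simply retrieves the co-located token in the next frame; larger radii trade strict motion correspondence for robustness to flow noise, in the spirit of the occlusion robustness the summary claims.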

📝 Abstract
Video restoration aims to reconstruct high-quality video sequences from low-quality inputs, addressing tasks such as super-resolution, denoising, and deblurring. Traditional regression-based methods often produce unrealistic details and require extensive paired datasets, while recent generative diffusion models face challenges in ensuring temporal consistency. We introduce DiTVR, a zero-shot video restoration framework that couples a diffusion transformer with trajectory-aware attention and a wavelet-guided, flow-consistent sampler. Unlike prior 3D convolutional or frame-wise diffusion approaches, our attention mechanism aligns tokens along optical flow trajectories, with particular emphasis on vital layers that exhibit the highest sensitivity to temporal dynamics. A spatiotemporal neighbor cache dynamically selects relevant tokens based on motion correspondences across frames. The flow-guided sampler injects data consistency only into low-frequency bands, preserving high-frequency priors while accelerating convergence. DiTVR establishes a new zero-shot state of the art on video restoration benchmarks, demonstrating superior temporal consistency and detail preservation while remaining robust to flow noise and occlusions.
Problem

Research questions and friction points this paper is trying to address.

Ensuring temporal consistency in video restoration tasks
Overcoming limitations of traditional regression-based methods
Addressing challenges in generative diffusion models for videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion transformer with trajectory-aware attention
Wavelet-guided, flow-consistent sampler
Spatiotemporal neighbor cache for token selection
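The wavelet-guided, flow-consistent sampler's core idea — injecting data consistency only into low-frequency bands while keeping the diffusion prior's high-frequency detail — can be illustrated with a minimal sketch. A one-level 2x2 averaging split stands in for a full wavelet transform; `haar_split`, `low_freq_consistency`, and the `weight` blending parameter are hypothetical names, not the paper's actual sampler.

```python
import numpy as np

def haar_split(x):
    """One-level Haar-style split of a 2D array with even dimensions:
    low-frequency approximation via 2x2 block averaging (upsampled back
    to full size), high-frequency residual as the remainder."""
    h, w = x.shape
    low = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    low_up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    return low_up, x - low_up

def low_freq_consistency(estimate, observation, weight=1.0):
    """Data-consistency step restricted to the low-frequency band:
    blend the observation's low frequencies into the current diffusion
    estimate while leaving the estimate's high-frequency (prior) detail
    untouched. `weight` controls the blending strength."""
    est_low, est_high = haar_split(estimate)
    obs_low, _ = haar_split(observation)
    blended_low = (1 - weight) * est_low + weight * obs_low
    return blended_low + est_high
```

Because the correction never touches the high-frequency residual, the generative prior's fine textures survive each sampling step, which is the mechanism the abstract credits for preserving detail while accelerating convergence.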