SparkVSR: Interactive Video Super-Resolution via Sparse Keyframe Propagation

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes an interactive video super-resolution (VSR) framework that addresses a key limitation of existing methods: they offer no user controllability and cannot correct unexpected artifacts. The approach introduces, for the first time, a sparse keyframe guidance mechanism: a user or model designates a few high-quality keyframes, whose information is propagated across the entire video via a two-stage latent-pixel training pipeline. This enables controllable restoration while preserving temporal consistency. The method supports flexible keyframe selection and a reference-free guidance strategy that balances fidelity to the provided keyframes with blind restoration capability. Experiments demonstrate significant gains over state-of-the-art methods on multiple VSR benchmarks, with improvements of up to 24.6%, 21.8%, and 5.6% on the CLIP-IQA, DOVER, and MUSIQ metrics, respectively. The framework also exhibits superior temporal coherence and perceptual quality, and generalizes to applications such as archival film restoration and video style transfer.
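The latent-level keyframe conditioning described above can be illustrated with a minimal sketch. The function name, tensor shapes, and the hard-overwrite fusion rule are illustrative assumptions, not the paper's actual implementation, which learns the fusion with a trained network:

```python
import numpy as np

def fuse_keyframe_latents(lr_latents, kf_latents, kf_indices):
    """Fuse LR video latents with sparse HR keyframe latents (hypothetical sketch).

    lr_latents: (T, C, H, W) latents encoded from the LR video
    kf_latents: (K, C, H, W) latents encoded from the HR keyframes
    kf_indices: length-K list of frame positions of the keyframes

    Returns the fused latents plus a per-frame mask marking which
    positions carry trusted HR priors for the propagation network.
    """
    fused = lr_latents.copy()
    mask = np.zeros((lr_latents.shape[0], 1, 1, 1), dtype=lr_latents.dtype)
    for k, t in enumerate(kf_indices):
        fused[t] = kf_latents[k]  # place the HR keyframe prior at frame t
        mask[t] = 1.0             # mark frame t as keyframe-conditioned
    return fused, mask
```

In this toy version the keyframe latents simply replace the LR latents at their positions; a real model would instead condition on both streams and learn how to propagate the HR detail to the unmarked frames.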

📝 Abstract
Video Super-Resolution (VSR) aims to restore high-quality video frames from low-resolution (LR) inputs, yet most existing VSR approaches behave like black boxes at inference time: users cannot reliably correct unexpected artifacts and can only accept whatever the model produces. In this paper, we propose a novel interactive VSR framework, dubbed SparkVSR, that makes sparse keyframes a simple and expressive control signal. Specifically, users first super-resolve a small set of keyframes using any off-the-shelf image super-resolution (ISR) model; SparkVSR then propagates the keyframe priors to the entire video sequence while remaining grounded in the original LR video motion. Concretely, we introduce a keyframe-conditioned, two-stage latent-pixel training pipeline that fuses LR video latents with sparsely encoded HR keyframe latents to learn robust cross-space propagation and refine perceptual details. At inference time, SparkVSR supports flexible keyframe selection (manual specification, codec I-frame extraction, or random sampling) and a reference-free guidance mechanism that continuously balances keyframe adherence and blind restoration, ensuring robust performance even when reference keyframes are absent or imperfect. Experiments on multiple VSR benchmarks demonstrate improved temporal consistency and strong restoration quality, surpassing baselines by up to 24.6%, 21.8%, and 5.6% on CLIP-IQA, DOVER, and MUSIQ, respectively, enabling controllable, keyframe-driven video super-resolution. Moreover, SparkVSR serves as a generic interactive, keyframe-conditioned video processing framework: it can be applied out of the box to unseen tasks such as old-film restoration and video style transfer. Our project page is available at: https://sparkvsr.github.io/
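The flexible keyframe selection modes mentioned in the abstract (manual specification, codec I-frame extraction, random sampling) can be sketched as a small dispatcher. This is a hypothetical helper, and uniform spacing stands in for true I-frame extraction, which would require parsing the codec bitstream:

```python
import random

def select_keyframes(num_frames, k, mode="uniform", manual=None, seed=0):
    """Pick k keyframe indices from a video of num_frames frames (sketch)."""
    if mode == "manual":
        # user-specified frame indices, returned in temporal order
        return sorted(manual)
    if mode == "random":
        rng = random.Random(seed)  # seeded for reproducibility
        return sorted(rng.sample(range(num_frames), k))
    # "uniform": evenly spaced frames, a stand-in for codec I-frame extraction
    step = max(1, num_frames // k)
    return list(range(0, num_frames, step))[:k]
```

For example, `select_keyframes(30, 3)` yields frames 0, 10, and 20; the downstream propagation model is agnostic to which mode produced the indices.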
Problem

Research questions and friction points this paper is trying to address.

Video Super-Resolution
interactive control
keyframe propagation
artifact correction
user intervention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive Video Super-Resolution
Sparse Keyframe Propagation
Latent-Pixel Fusion
Reference-Free Guidance
Cross-Space Propagation
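The reference-free guidance idea, which continuously balances keyframe adherence against blind restoration, can be illustrated as a classifier-free-guidance-style blend of two predictions. The function and weight parameter here are illustrative assumptions about the mechanism, not the paper's exact formulation:

```python
import numpy as np

def reference_free_guidance(pred_cond, pred_blind, w):
    """Blend a keyframe-conditioned prediction with a blind one (sketch).

    pred_cond:  restoration conditioned on the provided keyframes
    pred_blind: restoration with no keyframe reference
    w:          guidance weight in [0, 1]; w=1 trusts the keyframes fully,
                w=0 falls back to blind restoration, so output stays sane
                even when keyframes are absent or imperfect.
    """
    return w * pred_cond + (1.0 - w) * pred_blind
```

Lowering `w` when keyframes are suspected to be low quality lets the model degrade gracefully toward its blind-restoration behavior rather than propagating bad references.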