🤖 AI Summary
Existing cascaded video super-resolution (VSR) methods rely solely on text conditioning, which limits fidelity and cross-modal consistency in multimodal generation. To address this, we propose UniMMVSR, the first unified cascaded framework supporting hybrid conditioning (text, image, and video), built upon a latent video diffusion model. Our approach introduces a multimodal condition injection mechanism, a stage-wise collaborative training strategy, and a cross-modal data fusion method, enabling multimodal-guided 4K VSR generation and overcoming the fundamental limitations of single-text-conditioned approaches. Extensive experiments demonstrate significant improvements over state-of-the-art methods in generation quality, temporal coherence, and multimodal alignment. Moreover, the framework integrates seamlessly with mainstream foundation models to enable high-fidelity 4K video synthesis.
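For intuition, here is a minimal PyTorch-style sketch of what hybrid condition injection into a latent video diffusion denoiser could look like. All module names, feature dimensions, and the specific fusion scheme (channel-wise concatenation of the low-res latent, plus a shared cross-attention context built from text, image, and video embeddings) are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

class HybridConditionInjection(nn.Module):
    """Hypothetical hybrid-modal condition injection for a latent video
    diffusion denoiser: the low-res video latent is fused channel-wise
    with the noisy latent, while text / reference-image / reference-video
    embeddings are projected into one shared token sequence that the
    denoiser's cross-attention layers can consume."""

    def __init__(self, latent_ch=4, model_ch=320, ctx_dim=1024):
        super().__init__()
        # Channel-wise fusion of the noisy latent and the low-res latent.
        self.in_proj = nn.Conv3d(latent_ch * 2, model_ch,
                                 kernel_size=3, padding=1)
        # Per-modality projections into a shared cross-attention space.
        self.text_proj = nn.Linear(768, ctx_dim)    # e.g. text-encoder features
        self.image_proj = nn.Linear(1024, ctx_dim)  # reference-image features
        self.video_proj = nn.Linear(1024, ctx_dim)  # reference-video features

    def forward(self, z_noisy, z_lowres, text_emb,
                image_emb=None, video_emb=None):
        # z_noisy, z_lowres: (B, C, T, H, W); embeddings: (B, N_i, D_i).
        x = self.in_proj(torch.cat([z_noisy, z_lowres], dim=1))
        ctx = [self.text_proj(text_emb)]
        if image_emb is not None:          # conditions are optional, so the
            ctx.append(self.image_proj(image_emb))  # same model handles T2V,
        if video_emb is not None:          # I2V, and V2V-style inputs.
            ctx.append(self.video_proj(video_emb))
        # The concatenated token sequence feeds cross-attention downstream.
        return x, torch.cat(ctx, dim=1)
```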
📝 Abstract
Cascaded video super-resolution has emerged as a promising technique for decoupling the computational burden of generating high-resolution videos with large foundation models. Existing studies, however, are largely confined to text-to-video tasks and fail to leverage generative conditions beyond text, which are crucial for ensuring fidelity in multi-modal video generation. We address this limitation by presenting UniMMVSR, the first unified generative video super-resolution framework to incorporate hybrid-modal conditions, including text, images, and videos. We conduct a comprehensive exploration of condition injection strategies, training schemes, and data mixture techniques within a latent video diffusion model. A key challenge is designing distinct data construction and condition utilization methods so that the model can precisely exploit each condition type, given their varied correlations with the target video. Our experiments demonstrate that UniMMVSR significantly outperforms existing methods, producing videos with superior detail and closer conformity to multi-modal conditions. We also validate the feasibility of combining UniMMVSR with a base model to achieve multi-modal guided generation of 4K video, a feat unattainable with prior techniques.
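To make the cascade concrete, the following sketch shows how a UniMMVSR-style upscaler might be paired with a multimodal base model for 4K generation. The two-stage structure (cheap low-resolution generation, then super-resolution reusing the same hybrid conditions) follows the abstract, but `base_model.generate` and `vsr_model.super_resolve` are hypothetical interfaces invented for illustration:

```python
import torch

@torch.no_grad()
def cascaded_4k_generation(base_model, vsr_model, prompt,
                           ref_image=None, ref_video=None,
                           low_res=(480, 832), target=(2160, 3840)):
    """Illustrative two-stage cascade (all interfaces are assumptions):
    1) a multimodal base model synthesizes a low-resolution video;
    2) a UniMMVSR-style super-resolver upscales it to 4K, conditioned
       on the low-res result *and* the original text/image/video inputs
       rather than on text alone, to stay faithful to every modality."""
    # Stage 1: low-resolution generation with the foundation model.
    low_res_video = base_model.generate(
        prompt=prompt, image=ref_image, video=ref_video, size=low_res)
    # Stage 2: hybrid-conditioned super-resolution to the 4K target.
    return vsr_model.super_resolve(
        low_res_video, prompt=prompt, image=ref_image,
        video=ref_video, size=target)
```

The design point is that the second stage sees the full condition set, which is what distinguishes this cascade from text-only VSR pipelines.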