🤖 AI Summary
This work addresses the severe quality degradation in classic Chinese opera videos, which stems from limitations of early recording equipment and long-term storage deterioration. Existing real-world video super-resolution methods struggle to accurately model such complex degradations and often lack high-level semantic guidance, leading to blurry or distorted reconstructions. To overcome these challenges, the authors propose TextOVSR, a novel framework that introduces a dual-branch text prompting mechanism: negative prompts describe degradation types to constrain the solution space, while positive prompts describe semantic content to guide faithful detail recovery. The approach integrates a Text-Enhanced Discriminator (TED) and a Degradation-Robust Feature Fusion (DRF) module to effectively fuse cross-modal information and suppress degradation interference. Evaluated on the OperaLQ benchmark, TextOVSR outperforms state-of-the-art methods in both subjective visual quality and objective metrics.
📝 Abstract
Many classic opera videos exhibit poor visual quality due to the limitations of early filming equipment and long-term degradation during storage. Although real-world video super-resolution (RWVSR) has achieved significant advances in recent years, directly applying existing methods to degraded opera videos remains challenging. The difficulties are twofold. First, accurately modeling real-world degradations is complex: simplistic combinations of classical degradation kernels fail to capture the authentic noise distribution, while methods that extract real noise patches from external datasets are prone to style mismatches that introduce visual artifacts. Second, current RWVSR methods, which rely solely on degraded image features, struggle to reconstruct realistic and detailed textures due to a lack of high-level semantic guidance. To address these issues, we propose a Text-guided Dual-Branch Opera Video Super-Resolution (TextOVSR) network, which introduces two types of textual prompts to guide the super-resolution process. Specifically, degradation-descriptive text, derived from the degradation process, is incorporated into the negative branch to constrain the solution space. Simultaneously, content-descriptive text is incorporated into a positive branch and our proposed Text-Enhanced Discriminator (TED) to provide semantic guidance for enhanced texture reconstruction. Furthermore, we design a Degradation-Robust Feature Fusion (DRF) module to facilitate cross-modal feature fusion while suppressing degradation interference. Experiments on our OperaLQ benchmark show that TextOVSR outperforms state-of-the-art methods both qualitatively and quantitatively. The code is available at https://github.com/ChangHua0/TextOVSR.