🤖 AI Summary
This work addresses the challenges of low contrast and missing high-frequency details in low-light video super-resolution by proposing RetinexEVSR, a novel framework that uniquely integrates event camera signals with Retinex-based priors. The method introduces an illumination-guided event enhancement module and an event-guided reflectance enhancement module to enable bidirectional cross-modal fusion, effectively leveraging complementary information between noisy events and degraded RGB frames while preserving fine details. By employing Retinex decomposition to generate illumination maps that guide multi-scale feature enhancement, the approach fully exploits the high dynamic range of event cameras. Evaluated on three benchmarks, RetinexEVSR achieves state-of-the-art performance, surpassing existing event-based methods by 2.95 dB on the SDSD dataset while reducing runtime by 65%.
📝 Abstract
This paper addresses low-light video super-resolution (LVSR), which aims to restore high-resolution videos from low-light, low-resolution (LR) inputs. Existing LVSR methods often struggle to recover fine details due to limited contrast and insufficient high-frequency information. To overcome these challenges, we present RetinexEVSR, the first event-driven LVSR framework that leverages high-contrast event signals and Retinex-inspired priors to enhance video quality in low-light scenarios. Unlike previous approaches that directly fuse degraded signals, RetinexEVSR introduces a novel bidirectional cross-modal fusion strategy to extract and integrate meaningful cues from noisy event data and degraded RGB frames. Specifically, an illumination-guided event enhancement module progressively refines event features using illumination maps derived from the Retinex model, suppressing low-light artifacts while preserving high-contrast details. Furthermore, we propose an event-guided reflectance enhancement module that uses the enhanced event features to dynamically recover reflectance details via a multi-scale fusion mechanism. Experimental results show that RetinexEVSR achieves state-of-the-art performance on three datasets. Notably, on the SDSD benchmark, our method achieves up to a 2.95 dB gain while reducing runtime by 65% compared to prior event-based methods. Code: https://github.com/DachunKai/RetinexEVSR.
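To make the Retinex prior concrete: the Retinex model factors an image into reflectance and illumination, `I = R ⊙ L`, and the illumination map `L` is what guides the event enhancement module above. The sketch below is a minimal, hypothetical decomposition using the common channel-maximum prior for `L`; the paper's actual decomposition is learned and will differ.

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Toy Retinex decomposition: split an RGB frame into an
    illumination map L and a reflectance map R with img = R * L.
    L is estimated as the per-pixel maximum over color channels
    (a simple classical prior, not the paper's learned module)."""
    # Illumination: channel-wise maximum, kept as a single-channel map.
    L = img.max(axis=-1, keepdims=True)
    # Reflectance: element-wise ratio, guarded against division by zero.
    R = img / (L + eps)
    return R, L

# Toy dimly lit frame with values in [0, 0.2].
rng = np.random.default_rng(0)
frame = rng.random((4, 4, 3)) * 0.2
R, L = retinex_decompose(frame)
# Reconstruction check: R * L recovers the input (up to eps).
assert np.allclose(R * L, frame, atol=1e-4)
```

In this factorization, low light depresses `L` while `R` retains scene structure, which is why an illumination map is a natural guide for where event features should be enhanced most aggressively.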