🤖 AI Summary
Low-light video enhancement (LLVE) faces severe challenges, including non-uniform illumination, strong noise, and inter-frame flickering. To address these, this paper proposes a spatiotemporally consistent decomposition framework that, for the first time, disentangles video frames into two complementary components: a view-independent term (intrinsic appearance) and a view-dependent term (illumination and shading). Dynamic cross-frame feature matching establishes inter-frame correspondences, while a scene-level continuity constraint and a dual-branch interactive enhancement network jointly model spatiotemporal consistency within a single-frame encoder-decoder architecture. The method relies on joint multi-frame supervision rather than explicit photometric or motion modeling. Evaluated on multiple mainstream LLVE benchmarks, it achieves state-of-the-art performance with negligible parameter overhead and strong generalization.
📝 Abstract
Low-Light Video Enhancement (LLVE) seeks to restore dynamic or static scenes plagued by severe invisibility and noise. In this paper, we present an innovative video decomposition strategy that incorporates view-independent and view-dependent components to enhance the performance of LLVE. We leverage dynamic cross-frame correspondences for the view-independent term (which primarily captures intrinsic appearance) and impose a scene-level continuity constraint on the view-dependent term (which mainly describes the shading condition) to achieve consistent and satisfactory decomposition results. To further ensure consistent decomposition, we introduce a dual-structure enhancement network featuring a cross-frame interaction mechanism. By supervising different frames simultaneously, this network encourages them to exhibit matching decomposition features. This mechanism integrates seamlessly with encoder-decoder single-frame networks, incurring minimal additional parameter cost. Extensive experiments are conducted on widely recognized LLVE benchmarks covering diverse scenarios. Our framework consistently outperforms existing methods, establishing new state-of-the-art performance.
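To make the decomposition idea concrete, here is a minimal, non-learned sketch of the two-term model and the two consistency objectives described above. It assumes a multiplicative (Retinex-style) model `I = R * S`, where `R` is the view-independent reflectance and `S` is the view-dependent shading; the `box_blur` shading estimator and all function names are hypothetical illustrations, not the paper's actual learned network, which obtains these terms from a dual-structure encoder-decoder.

```python
import numpy as np

def box_blur(img, k=5):
    """Crude local-mean smoothing used here as a stand-in shading estimator
    (the paper instead predicts shading with a learned network)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(frame, eps=1e-6):
    """Split a grayscale frame I into a view-dependent shading term S and a
    view-independent reflectance term R under the model I = R * S."""
    shading = box_blur(frame) + eps          # smooth, illumination-like term
    reflectance = frame / shading            # intrinsic-appearance term
    return reflectance, shading

def consistency_losses(frame_a, frame_b):
    """Toy cross-frame objectives: reflectance should match across frames
    (view-independent), and shading should change smoothly between frames
    (scene-level continuity constraint)."""
    r_a, s_a = decompose(frame_a)
    r_b, s_b = decompose(frame_b)
    reflectance_loss = np.abs(r_a - r_b).mean()  # cross-frame correspondence
    shading_loss = np.abs(s_a - s_b).mean()      # scene-level continuity
    return reflectance_loss, shading_loss
```

In the actual method, the reflectance matching is guided by dynamic cross-frame feature correspondences rather than a pixel-wise difference, but the supervisory structure is the same: penalize disagreement in the view-independent term while keeping the view-dependent term temporally smooth.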