🤖 AI Summary
Rolling-shutter readout coupled with AC-powered illumination produces periodic dark bands that degrade short-exposure images and impair downstream vision tasks such as detection and tracking. Research on flicker removal has been hindered by the absence of a large-scale, motion-rich benchmark with realistic flicker degradation. To address this, we introduce BurstDeflicker, a scalable flicker-removal benchmark built from three complementary acquisition strategies: a Retinex-based pipeline for controllable flicker synthesis, real-world capture under AC-powered illumination, and green-screen motion composition, which together enable diverse, physically plausible motion while preserving authentic degradation characteristics. The benchmark includes 4,000 real flicker images from varied scenes. Extensive experiments demonstrate substantial improvements in flicker suppression and model generalization. By providing realistic, motion-aware flicker data, BurstDeflicker establishes a rigorous foundation for flicker modeling, analysis, and removal research.
📝 Abstract
Flicker artifacts in short-exposure images are caused by the interplay between the row-wise exposure mechanism of rolling shutter cameras and the temporal intensity variations of alternating current (AC)-powered lighting. These artifacts typically appear as an uneven brightness distribution across the image, forming noticeable dark bands. Beyond compromising image quality, this structured noise also impairs high-level tasks such as object detection and tracking, where reliable lighting is crucial. Despite the prevalence of flicker, the lack of a large-scale, realistic dataset has been a significant barrier to advancing research on flicker removal. To address this issue, we present BurstDeflicker, a scalable benchmark constructed using three complementary data-acquisition strategies. First, we develop a Retinex-based synthesis pipeline that redefines the goal of flicker removal and enables controllable manipulation of key flicker-related attributes (e.g., intensity, area, and frequency), thereby facilitating the generation of diverse flicker patterns. Second, we capture 4,000 real-world flicker images from different scenes, which help the model better understand the spatial and temporal characteristics of real flicker artifacts and generalize more effectively to in-the-wild scenarios. Finally, because dynamic scenes are non-repeatable, we propose a green-screen method that incorporates motion into image pairs while preserving real flicker degradation. Comprehensive experiments demonstrate the effectiveness of our dataset and its potential to advance research in flicker removal.
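To make the synthesis idea concrete, here is a minimal sketch of how a Retinex-style flicker synthesizer could look. It is not the paper's pipeline: it assumes instantaneous row sampling (a finite exposure would integrate the light waveform and soften the bands) and a crude max-channel illumination estimate, and every name (`synthesize_flicker`, `depth`, `t_readout`, `mask`) is illustrative.

```python
import numpy as np

def synthesize_flicker(clean, f_ac=50.0, t_readout=1.0 / 60.0,
                       depth=0.6, phase=0.0, mask=None):
    """Stamp rolling-shutter flicker bands onto a clean image.

    clean:     float array (H, W, 3) in [0, 1].
    f_ac:      mains frequency in Hz; light power varies as sin^2(2*pi*f_ac*t),
               so bands repeat at twice this frequency (100 Hz for 50 Hz mains).
    t_readout: time the rolling shutter takes to scan all rows, in seconds.
    depth:     flicker *intensity* (modulation depth of the dark bands).
    mask:      optional (H, W) float map controlling the flicker *area*.
    """
    h = clean.shape[0]
    # Each row r is read out at its own time t_r, so the temporal light
    # variation is imprinted spatially along the row axis.
    t = np.arange(h) * (t_readout / h)
    gain = 1.0 - depth * 0.5 * (1.0 + np.cos(4.0 * np.pi * f_ac * t + phase))
    gain = gain[:, None, None]           # broadcast over columns and channels
    if mask is not None:                 # restrict the banding to a region
        gain = 1.0 - (1.0 - gain) * mask[:, :, None]

    # Crude Retinex split I = R * L: use the per-pixel channel max as the
    # illumination proxy and modulate only that layer, leaving reflectance
    # intact. With this simple proxy the result equals row-wise scaling of
    # the image; a real pipeline would use a smoother illumination estimate.
    L = np.clip(clean.max(axis=2, keepdims=True), 1e-3, 1.0)
    R = clean / L
    return np.clip(R * (L * gain), 0.0, 1.0)
```

Under these assumptions, `synthesize_flicker(img, f_ac=60.0, depth=0.8)` would stamp 120 Hz bands onto `img`, and sweeping `depth`, `t_readout`, and `mask` covers the intensity, frequency, and area axes the abstract mentions.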
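The green-screen step can be pictured as ordinary chroma-key compositing applied twice, once per image of a real flicker/clean pair, so both share identical motion while the genuine flicker degradation is untouched. The sketch below is an assumption-laden illustration, not the paper's method: the keying rule, spill handling, and all names are made up, and a full pipeline would also need to account for flicker bands crossing the inserted foreground.

```python
import numpy as np

def green_screen_composite(fg, flicker_bg, clean_bg, g_margin=0.15):
    """Paste one green-screen foreground onto a flicker image and its clean
    counterpart so the resulting pair shares the same motion.

    fg, flicker_bg, clean_bg: float arrays (H, W, 3) in [0, 1].
    """
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # Crude chroma key: pixels whose green channel dominates both other
    # channels by more than g_margin are treated as backdrop (alpha -> 0),
    # with a soft ramp in between.
    greenness = g - np.maximum(r, b)
    alpha = np.clip((g_margin - greenness) / g_margin, 0.0, 1.0)[..., None]
    # Mild green-spill suppression on the kept pixels.
    fg = fg.copy()
    fg[..., 1] = np.minimum(g, np.maximum(r, b) + g_margin)
    out_flicker = np.clip(alpha * fg + (1.0 - alpha) * flicker_bg, 0.0, 1.0)
    out_clean = np.clip(alpha * fg + (1.0 - alpha) * clean_bg, 0.0, 1.0)
    return out_flicker, out_clean
```

Because the same `alpha` and foreground are blended into both backgrounds, the only difference between the two outputs is the real flicker degradation, which is exactly the supervision signal a paired dataset needs.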