🤖 AI Summary
Current single-image reflection removal (SIRR) research is hindered by the lack of large-scale, high-quality, real-world benchmark datasets. To address this, we introduce the first large-scale, in-the-wild SIRR benchmark, comprising 5,300 pixel-aligned reflection/no-reflection image pairs that span diverse illumination conditions, object materials, and reflection patterns; it further includes 100 real-world, ground-truth-free images for generalization evaluation. A rigorously controlled acquisition pipeline ensures data fidelity. We propose an end-to-end U-Net-based removal model and evaluate it comprehensively with five metrics: PSNR, SSIM, LPIPS, DISTS, and NIQE. Experiments confirm the dataset's validity and establish a robust baseline. All data and code are publicly released to advance the standardization and practical deployment of SIRR.
📝 Abstract
Removing reflections is a crucial task in computer vision, with significant applications in photography and image enhancement. Nevertheless, existing methods are constrained by the absence of large-scale, high-quality, and diverse datasets. In this paper, we present a novel benchmark for Single Image Reflection Removal (SIRR). We have developed a large-scale dataset containing 5,300 high-quality, pixel-aligned image pairs, each consisting of a reflection image and its corresponding clean version. The dataset is divided into two parts: 5,000 pairs for training and 300 pairs for validation. Additionally, we include 100 real-world testing images without ground truth (GT) to further evaluate the practical performance of reflection removal methods. All image pairs are precisely aligned at the pixel level to guarantee accurate supervision, and the dataset encompasses a broad spectrum of real-world scenarios, featuring various lighting conditions, object types, and reflection patterns. To validate the usefulness of our dataset, we train a U-Net-based model and evaluate it using five widely used metrics: PSNR, SSIM, LPIPS, DISTS, and NIQE. We will release both the dataset and the code at https://github.com/caijie0620/OpenRR-5k to facilitate future research in this field.
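Of the five metrics listed, PSNR is the simplest full-reference measure: it compares a restored image against its pixel-aligned ground truth via mean squared error. As a minimal illustrative sketch (not the paper's evaluation code, and using a toy image rather than dataset samples), it can be computed with NumPy alone:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between a ground-truth image and a restored one.

    Higher is better; identical images give +inf.
    """
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Hypothetical 8-bit images (placeholders, not drawn from the dataset):
gt = np.zeros((64, 64), dtype=np.uint8)
pred = np.full((64, 64), 10, dtype=np.uint8)  # constant error of 10 -> MSE = 100
print(round(psnr(gt, pred), 2))
```

SSIM, LPIPS, DISTS, and NIQE would in practice come from established implementations (e.g. scikit-image or dedicated perceptual-metric packages) rather than being hand-rolled, since their definitions involve windowed statistics or learned features.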