Improved Mapping Between Illuminations and Sensors for RAW Images

📅 2025-08-20
🤖 AI Summary
RAW images suffer severe color casts that depend on both the sensor's spectral response and the scene illumination, so models trained on them generalize poorly across devices and lighting conditions; capturing large-scale data for every sensor and illumination combination is prohibitively expensive for deep learning. To address this, we introduce the first large-scale multi-sensor, multi-illumination RAW mapping dataset, comprising 390 illuminations × 4 cameras × 18 scenes. We further propose a lightweight neural network that jointly models illumination transformations and sensor spectral responses directly in the RAW domain, enabling high-fidelity cross-illumination and cross-sensor image mapping. Our method is the first to support end-to-end conversion between arbitrary illumination–sensor combinations, drastically reducing data acquisition overhead. It achieves state-of-the-art performance on both illumination and sensor mapping tasks and significantly improves the training efficacy of downstream neural ISP models.
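The paper's mapping is learned by a neural network trained on its dataset; as a rough illustration of the task itself (not the paper's method), a classical baseline maps RAW values between two illumination–sensor conditions with a single global 3×3 color transform fit by least squares on paired pixels. A minimal sketch, assuming linear RAW RGB normalized to [0, 1]:

```python
import numpy as np

def fit_raw_mapping(src, dst):
    """Fit a 3x3 matrix M minimizing ||src @ M - dst||^2.

    src, dst: (N, 3) arrays of paired linear RAW RGB values captured
    under the source and target illumination/sensor condition.
    """
    M, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return M

def apply_raw_mapping(img, M):
    """Map an (H, W, 3) linear RAW image with the fitted transform."""
    h, w, c = img.shape
    out = img.reshape(-1, c) @ M
    return np.clip(out, 0.0, 1.0).reshape(h, w, c)
```

A global linear transform cannot capture scene-dependent or spatially varying effects, which is the gap the paper's learned mapping targets.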

📝 Abstract
RAW images are unprocessed camera sensor output with sensor-specific RGB values based on the sensor's color filter spectral sensitivities. RAW images also incur strong color casts due to the sensor's response to the spectral properties of scene illumination. The sensor- and illumination-specific nature of RAW images makes it challenging to capture RAW datasets for deep learning methods, as scenes need to be captured for each sensor and under a wide range of illumination. Methods for illumination augmentation for a given sensor and the ability to map RAW images between sensors are important for reducing the burden of data capture. To explore this problem, we introduce the first-of-its-kind dataset comprising carefully captured scenes under a wide range of illumination. Specifically, we use a customized lightbox with tunable illumination spectra to capture several scenes with different cameras. Our illumination and sensor mapping dataset has 390 illuminations, four cameras, and 18 scenes. Using this dataset, we introduce a lightweight neural network approach for illumination and sensor mapping that outperforms competing methods. We demonstrate the utility of our approach on the downstream task of training a neural ISP. Link to project page: https://github.com/SamsungLabs/illum-sensor-mapping.
Problem

Research questions and friction points this paper aims to address.

Mapping RAW images between different camera sensors
Reducing color cast from illumination in RAW images
Enabling illumination augmentation for sensor-specific datasets

Innovation

Methods, ideas, or system contributions that make the work stand out.

Customized lightbox with tunable illumination spectra
Lightweight neural network for illumination mapping
First-of-its-kind multi-illumination multi-sensor RAW dataset
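Given the stated capture counts (390 illuminations, 4 cameras, 18 scenes), cross-sensor training pairs can be enumerated as captures that share scene and illumination but differ in camera. A hypothetical indexing sketch, for illustration only (not the released file layout):

```python
from itertools import product

# Assumed index spaces from the stated dataset size:
# 390 illuminations x 4 cameras x 18 scenes = 28,080 captures.
ILLUMS, CAMERAS, SCENES = range(390), range(4), range(18)

def sensor_mapping_pairs():
    """Yield ((illum, src_cam, scene), (illum, dst_cam, scene)) pairs
    where the two captures differ only in camera."""
    for illum, scene in product(ILLUMS, SCENES):
        for src_cam, dst_cam in product(CAMERAS, CAMERAS):
            if src_cam != dst_cam:
                yield (illum, src_cam, scene), (illum, dst_cam, scene)
```

This yields 390 × 18 × 4 × 3 = 84,240 ordered source/target pairs for sensor mapping alone, which suggests why a single dataset of this size can feed many mapping tasks.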