🤖 AI Summary
Multi-camera raw-to-raw color inconsistency, arising from variations in sensor response and optical systems, causes poor ISP compatibility and image fusion failures; existing methods lack robustness to illumination changes and rely on paired or synchronized data. Method: We propose a lightweight Neural Physical Model (NPM) that embeds differentiable physical imaging priors, enabling unpaired training, illumination-adaptive inference, and physics-guided initialization. It leverages illumination-conditioned raw simulation and joint weakly supervised/self-supervised optimization. Contribution/Results: On the NUS and BeyondRGB benchmarks, NPM outperforms state-of-the-art methods, reducing chromatic error in the cross-device raw domain by 32.7% while improving generalization and computational efficiency (<1.2G FLOPs), establishing a new paradigm for edge-deployable, multi-camera ISP co-processing.
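To make the "illumination-adaptive" idea concrete, the sketch below blends pre-measured per-illuminant 3x3 raw-to-raw matrices according to the scene's estimated white point. This is only an illustrative baseline under assumed names and values (`TRANSFORMS`, `illumination_adaptive_matrix`, the matrix entries); the NPM itself is a learned neural model conditioned on illumination, not a fixed matrix blend.

```python
import numpy as np

# Hypothetical per-illuminant raw-to-raw matrices for one device pair, measured offline
# from a color chart; the actual NPM conditions a learned neural model on illumination
# rather than blending fixed matrices. All values here are illustrative only.
TRANSFORMS = {
    # illuminant name: (normalized white-point rgb, 3x3 source->target raw matrix)
    "tungsten": (np.array([0.45, 0.37, 0.18]),
                 np.array([[1.08, 0.03, -0.04],
                           [0.02, 0.95,  0.05],
                           [-0.03, 0.04, 1.02]])),
    "daylight": (np.array([0.33, 0.34, 0.33]),
                 np.array([[1.02, 0.01, -0.01],
                           [0.00, 0.99,  0.02],
                           [-0.01, 0.02, 1.00]])),
    "shade":    (np.array([0.28, 0.33, 0.39]),
                 np.array([[0.98, 0.02,  0.00],
                           [0.01, 1.01,  0.01],
                           [0.00, 0.03,  0.99]])),
}

def illumination_adaptive_matrix(scene_white_point, transforms=TRANSFORMS):
    """Blend per-illuminant matrices by inverse distance between the scene's estimated
    white point and each calibration illuminant, so the raw-to-raw mapping follows the
    current illumination instead of staying fixed."""
    weights, mats = [], []
    for white_point, matrix in transforms.values():
        dist = np.linalg.norm(scene_white_point - white_point) + 1e-6
        weights.append(1.0 / dist)
        mats.append(matrix)
    weights = np.asarray(weights) / np.sum(weights)
    return sum(w * m for w, m in zip(weights, mats))

# Usage: estimate the scene white point (e.g., from an AWB module), then derive the
# device-to-device matrix for the current illumination.
M = illumination_adaptive_matrix(np.array([0.31, 0.34, 0.35]))
```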
📝 Abstract
Achieving consistent color reproduction across multiple cameras is essential for seamless image fusion and image signal processing (ISP) pipeline compatibility in modern devices, but it is challenging due to variations in sensors and optics. Existing raw-to-raw conversion methods face limitations such as poor adaptability to changing illumination, high computational cost, or impractical requirements such as simultaneous camera operation and overlapping fields of view. We introduce the Neural Physical Model (NPM), a lightweight, physically informed approach that simulates raw images under specified illumination to estimate transformations between devices. The NPM adapts effectively to varying illumination conditions, can be initialized with physical measurements, and supports training with or without paired data. Experiments on the public NUS and BeyondRGB datasets demonstrate that NPM outperforms recent state-of-the-art methods, providing robust chromatic consistency across different sensors and optical systems.
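As a minimal illustration of initialization from physical measurements, the sketch below fits a per-illuminant 3x3 raw-to-raw matrix by least squares from corresponding color-chart patches captured by the two cameras under the same illuminant, then applies it to a source raw image. The function names (`fit_raw_to_raw_matrix`, `apply_matrix`) and the synthetic data are assumptions for illustration, not the paper's implementation; the NPM replaces this fixed matrix with a learned model that it can initialize from such measurements.

```python
import numpy as np

def fit_raw_to_raw_matrix(src_patches, tgt_patches):
    """Least-squares 3x3 matrix mapping source-camera raw RGB to target-camera raw RGB.

    src_patches, tgt_patches: (N, 3) arrays of corresponding raw measurements
    (e.g., color-chart patches) captured under the same illuminant. Such a fit is one
    plausible form of physics-guided initialization."""
    # Solve M in tgt ≈ src @ M for this device pair and illuminant.
    M, *_ = np.linalg.lstsq(src_patches, tgt_patches, rcond=None)
    return M  # shape (3, 3)

def apply_matrix(raw_rgb, M):
    """Apply the fitted matrix to a demosaiced (H, W, 3) raw image, clipping to [0, 1]."""
    h, w, _ = raw_rgb.shape
    mapped = raw_rgb.reshape(-1, 3) @ M
    return np.clip(mapped, 0.0, 1.0).reshape(h, w, 3)

# Usage with synthetic stand-in data: 24 chart patches under one illuminant.
rng = np.random.default_rng(0)
src = rng.uniform(0.05, 0.9, size=(24, 3))    # source-camera measurements
true_M = np.array([[1.05, 0.02, -0.03],
                   [0.01, 0.97,  0.04],
                   [-0.02, 0.03, 1.01]])
tgt = src @ true_M                            # target-camera measurements
M_fit = fit_raw_to_raw_matrix(src, tgt)       # recovers true_M up to numerical noise
```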