Leveraging Multispectral Sensors for Color Correction in Mobile Cameras

📅 2025-12-09
🤖 AI Summary
To address the limited color correction accuracy of mobile device cameras constrained by single-modality RGB input, this paper proposes an end-to-end jointly optimized framework that fuses high-resolution RGB with low-resolution multispectral sensor data. Unlike conventional approaches relying on hand-crafted priors or feature concatenation, our method preserves the full multispectral information flow throughout the pipeline and unifies the modeling of sensor response, spectral reconstruction, and color mapping—enabling seamless integration with state-of-the-art image architectures. Trained on a custom multispectral rendering dataset, the model achieves computational efficiency while significantly improving cross-device robustness. Experiments demonstrate a reduction of up to 50% in mean color error (ΔE₀₀) compared to the best RGB-only baseline, outperforming methods using multispectral priors alone, and exhibiting superior stability across diverse hardware spectral responses.
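The summary above quotes results in terms of mean color error (ΔE₀₀, the CIEDE2000 color difference). As a rough illustration of how such an error is scored, the helper below computes the simpler Euclidean CIE76 distance in CIELAB space; the full CIEDE2000 formula used by the paper adds perceptual weighting terms, and this function name and toy data are illustrative assumptions, not the authors' code:

```python
import numpy as np

def mean_delta_e76(lab_pred, lab_ref):
    """Mean CIE76 color difference between two CIELAB image arrays.

    lab_pred, lab_ref: arrays of shape (..., 3) holding L*, a*, b* values.
    Note: the paper reports ΔE₀₀ (CIEDE2000), a perceptually refined
    variant; the plain Euclidean CIE76 distance below is a simplified
    stand-in for illustration only.
    """
    diff = np.asarray(lab_pred, dtype=float) - np.asarray(lab_ref, dtype=float)
    # Per-pixel Euclidean distance over (L*, a*, b*), averaged over all pixels
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

# Toy example: a uniform offset of 3 units on the a* channel
pred = np.zeros((4, 4, 3))
ref = np.zeros((4, 4, 3))
pred[..., 1] = 3.0
print(mean_delta_e76(pred, ref))  # 3.0
```

A "50% reduction in mean color error" then simply means halving this averaged per-pixel distance relative to the baseline.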

📝 Abstract
Recent advances in snapshot multispectral (MS) imaging have enabled compact, low-cost spectral sensors for consumer and mobile devices. By capturing richer spectral information than conventional RGB sensors, these systems can enhance key imaging tasks, including color correction. However, most existing methods treat the color correction pipeline in separate stages, often discarding MS data early in the process. We propose a unified, learning-based framework that (i) performs end-to-end color correction and (ii) jointly leverages data from a high-resolution RGB sensor and an auxiliary low-resolution MS sensor. Our approach integrates the full pipeline within a single model, producing coherent and color-accurate outputs. We demonstrate the flexibility and generality of our framework by refactoring two different state-of-the-art image-to-image architectures. To support training and evaluation, we construct a dedicated dataset by aggregating and repurposing publicly available spectral datasets, rendering under multiple RGB camera sensitivities. Extensive experiments show that our approach improves color accuracy and stability, reducing error by up to 50% compared to RGB-only and MS-driven baselines. Datasets, code, and models will be made available upon acceptance.
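The abstract mentions rendering the aggregated spectral datasets under multiple RGB camera sensitivities. Conceptually, that rendering step is a per-pixel integration of the scene spectrum against each channel's spectral sensitivity function. A minimal numpy sketch of this idea follows; the function name, the 31-band sampling, and the flat toy spectrum are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

def render_rgb(spectral_img, sensitivities):
    """Render a linear RGB image from a hyperspectral cube.

    spectral_img:  (H, W, B) radiance sampled at B wavelengths
    sensitivities: (B, 3) camera spectral sensitivity functions,
                   one column per R, G, B channel
    Returns an (H, W, 3) linear RGB image.
    """
    # Per-pixel dot product of the spectrum with each channel's sensitivity,
    # i.e. a discrete approximation of the integral over wavelength
    return spectral_img @ sensitivities

# Toy example: 31 bands (e.g. 400-700 nm at 10 nm steps), flat unit spectrum
bands = 31
img = np.ones((2, 2, bands))
sens = np.full((bands, 3), 1.0 / bands)  # each channel normalized to sum to 1
rgb = render_rgb(img, sens)
print(rgb.shape)  # (2, 2, 3)
```

Swapping in different `sensitivities` matrices is what lets one spectral dataset stand in for many RGB cameras, which is how the paper evaluates robustness across hardware spectral responses.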
Problem

Research questions and friction points this paper is trying to address.

Mobile camera color correction is limited by single-modality RGB input
Existing pipelines handle color correction in separate stages, discarding multispectral data early
Hand-crafted priors and naive feature concatenation generalize poorly across device spectral responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified learning-based framework for end-to-end color correction
Jointly leverages a high-resolution RGB sensor and an auxiliary low-resolution multispectral sensor
Preserves the full multispectral information flow within a single model, improving accuracy and cross-device stability
Luca Cogo
University of Milano-Bicocca

Marco Buzzelli
University of Milano-Bicocca
computer vision, image processing, machine learning

Simone Bianco
University of Milano-Bicocca

Javier Vazquez-Corral
Computer Vision Center, Universitat Autònoma de Barcelona

Raimondo Schettini
University of Milano-Bicocca
color imaging, artificial vision, artificial intelligence, image understanding