🤖 AI Summary
To address the poor generalization of color constancy methods across cameras with disparate spectral sensitivities, this paper proposes a diffusion-based universal illumination estimation framework. Methodologically: (1) a single-step deterministic denoising inference mechanism is designed to improve computational efficiency and stability; (2) a Laplacian decomposition constraint is introduced to preserve the color checker's structure while allowing illumination-dependent color adaptation; and (3) a mask-based data augmentation strategy is developed to cope with imprecise real-world color checker annotations. Evaluated on a bidirectional cross-camera benchmark, the method achieves worst-25% angular errors of 5.15° and 4.32°, significantly outperforming state-of-the-art approaches. Crucially, it requires no camera-specific training, demonstrating strong cross-camera generalization and practical applicability.
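For context, the worst-25% figures above follow the standard color-constancy evaluation protocol: compute the per-image angular error between the estimated and ground-truth illuminant RGB vectors, then average over the worst quartile of test images. A minimal sketch of that metric (the function names are illustrative, not from the paper):

```python
import numpy as np

def angular_error_deg(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Per-image angular error (degrees) between estimated and
    ground-truth illuminant RGB vectors of shape (N, 3)."""
    est = est / np.linalg.norm(est, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip(np.sum(est * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def worst_25_mean(errors: np.ndarray) -> float:
    """Mean of the largest 25% of per-image angular errors."""
    errors = np.sort(errors)
    k = max(1, int(np.ceil(0.25 * errors.size)))
    return float(errors[-k:].mean())
```

Because it averages only the hardest quartile, this metric rewards exactly the tail-case stability the paper claims, rather than mean performance alone.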
📝 Abstract
Color constancy methods often struggle to generalize across camera sensors due to their differing spectral sensitivities. We present GCC, which leverages diffusion models to inpaint color checkers into images for illumination estimation. Our key innovations include (1) a single-step deterministic inference approach that inpaints color checkers reflecting scene illumination, (2) a Laplacian decomposition technique that preserves checker structure while allowing illumination-dependent color adaptation, and (3) a mask-based data augmentation strategy for handling imprecise color checker annotations. GCC demonstrates superior robustness in cross-camera scenarios, achieving state-of-the-art worst-25% angular errors of 5.15° and 4.32° in bidirectional evaluations. These results highlight our method's stability and generalization across different camera characteristics without sensor-specific training, making it a versatile solution for real-world applications.
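The abstract does not spell out the exact Laplacian formulation, but the underlying idea (keep the checker's high-frequency structure fixed while letting the low-frequency band carry illumination-dependent color) can be illustrated with a one-level frequency split. This is a rough sketch under assumptions, not the paper's implementation; the single-level decomposition and the `sigma` value are placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_split(img: np.ndarray, sigma: float = 2.0):
    """One-level Laplacian-style split of an HxWx3 float image:
    a low-frequency (color) band and a high-frequency residual."""
    low = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur spatially only
    return low, img - low

def recompose(adapted_low: np.ndarray, high: np.ndarray) -> np.ndarray:
    """Recombine an illumination-adapted low-frequency band with the
    original high-frequency structure, so edges and patch layout survive."""
    return np.clip(adapted_low + high, 0.0, 1.0)
```

In this reading, the diffusion model is free to recolor the low-frequency band to match scene illumination, while the residual pins down the checker's geometry; the average color of the rendered achromatic patches then yields the illuminant estimate.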