🤖 AI Summary
This paper addresses cross-device and cross-color-space color matching, proposing cmKAN, a unified framework for raw-to-raw, raw-to-sRGB, and sRGB-to-sRGB mapping. Methodologically, it introduces the first hypernetwork-based approach that dynamically generates spatially varying weight maps to modulate the learnable spline parameters of a Kolmogorov-Arnold Network (KAN), enabling pixel-wise adaptive color correction. To support training, the authors construct the first large-scale dataset of paired dual-camera images, enabling both supervised and unsupervised learning. Experiments show that cmKAN achieves a 37.3% average PSNR improvement over state-of-the-art methods across multiple tasks while remaining computationally efficient. The code, dataset, and pretrained models are publicly released.
📝 Abstract
We present cmKAN, a versatile framework for color matching. Given an input image with colors from a source color distribution, our method effectively and accurately maps these colors to match a target color distribution in both supervised and unsupervised settings. Our framework leverages the spline capabilities of Kolmogorov-Arnold Networks (KANs) to model the color matching between source and target distributions. Specifically, we developed a hypernetwork that generates spatially varying weight maps to control the nonlinear splines of a KAN, enabling accurate color matching. As part of this work, we introduce the first large-scale dataset of paired images captured by two distinct cameras and use it to evaluate the efficacy of our method and existing ones at color matching. We evaluated our approach across several color-matching tasks: (1) raw-to-raw mapping, where the source color distribution is in one camera's raw color space and the target is in another camera's raw space; (2) raw-to-sRGB mapping, where the source color distribution is in a camera's raw space and the target is in the display sRGB space, emulating the color rendering of a camera ISP; and (3) sRGB-to-sRGB mapping, where the goal is to transfer colors from a source sRGB space (e.g., produced by a source camera ISP) to a target sRGB space (e.g., from a different camera ISP). The results show that our method outperforms existing approaches by 37.3% on average in both supervised and unsupervised cases while remaining lightweight compared to other methods. The code, dataset, and pre-trained models are available at: https://github.com/gosha20777/cmKAN
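To make the core idea concrete, here is a rough sketch of a hypernetwork-modulated spline color mapping. Everything below is an illustrative assumption rather than the paper's implementation: the shapes are toy-sized, a Gaussian RBF basis stands in for KAN's B-splines, and `hypernet` is a random placeholder for whatever network produces the per-pixel weight maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an H x W image with C channels, K spline basis functions.
H, W, C, K = 4, 4, 3, 8
centers = np.linspace(0.0, 1.0, K)   # knot positions on [0, 1]
width = 1.0 / (K - 1)

def rbf_basis(x):
    # Gaussian RBF basis: a simplified stand-in for the B-splines in a KAN.
    # x: (..., ) values in [0, 1] -> (..., K) basis activations.
    return np.exp(-((x[..., None] - centers) ** 2) / (2 * width ** 2))

# "Learnable" per-channel spline coefficients (random here for illustration).
base_coeff = rng.normal(size=(C, K))

def hypernet(img):
    # Placeholder hypernetwork: any network mapping the input image to
    # spatially varying modulation maps of shape (H, W, C, K).
    return 1.0 + 0.1 * rng.normal(size=img.shape + (K,))

img = rng.uniform(size=(H, W, C))    # source-space colors in [0, 1]
weights = hypernet(img)              # per-pixel weight maps
phi = rbf_basis(img)                 # (H, W, C, K) basis activations
# Pixel-wise adaptive mapping: the weight maps modulate the spline output.
out = (phi * weights * base_coeff).sum(-1)
print(out.shape)                     # (4, 4, 3)
```

The point of the construction is that `base_coeff` defines a single global spline per channel, while the hypernetwork's `weights` vary per pixel, so the effective color transform adapts spatially, which is what enables pixel-wise color correction.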