🤖 AI Summary
Video color grading heavily relies on professional expertise, making it inaccessible to non-experts. This paper proposes the first diffusion-based reference-driven video color grading framework, which generates high-quality, frame-accurate, lossless 3D lookup tables (LUTs) for artistic color transformation. The method jointly leverages high-level style transfer from a reference video and low-level attribute editing, such as contrast and brightness, guided by text prompts, thereby preserving structural integrity while enabling fine-grained user customization. Key contributions include: (i) the first application of diffusion models to LUT generation; (ii) a unified paradigm integrating high-level style alignment with low-level feature controllability; and (iii) support for fast inference and end-to-end optimization. Extensive experiments and a large-scale user study demonstrate that the approach significantly outperforms state-of-the-art methods in visual quality, style fidelity, and preference alignment.
📝 Abstract
Different from color correction and transfer, color grading involves adjusting the colors of a video for artistic or storytelling purposes, establishing a specific look or mood. However, due to the complexity of the process and the need for specialized editing skills, video color grading remains primarily the domain of professional colorists. In this paper, we present a reference-based video color grading framework. Our key idea is to explicitly generate a look-up table (LUT) for color attribute alignment between reference scenes and the input video via a diffusion model. As a training objective, we enforce that high-level features of the reference scenes, such as look, mood, and emotion, are similar to those of the input video. Our LUT-based approach allows for color grading without any loss of structural detail across video frames while also achieving fast inference. We further build a pipeline to incorporate user preferences via text prompts for low-level feature enhancement, such as contrast and brightness. Experimental results, including extensive user studies, demonstrate the effectiveness of our approach for video color grading. Code is publicly available at https://github.com/seunghyuns98/VideoColorGrading.
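To make the LUT-based formulation concrete: once a 3D LUT is produced, grading a frame reduces to a per-pixel color lookup with trilinear interpolation, which is why the approach preserves structural detail (it never resamples spatially) and runs fast. The sketch below is illustrative only and assumes nothing about the paper's actual implementation; `apply_3d_lut`, the LUT resolution `n = 17`, and the NumPy layout are all hypothetical choices for this example.

```python
import numpy as np

def apply_3d_lut(frame, lut):
    """Apply a 3D LUT (n x n x n x 3, values in [0, 1]) to an RGB
    frame (H x W x 3, values in [0, 1]) via trilinear interpolation."""
    n = lut.shape[0]
    idx = frame * (n - 1)                  # pixel values -> LUT grid coords
    lo = np.floor(idx).astype(int)
    hi = np.clip(lo + 1, 0, n - 1)
    frac = idx - lo                        # fractional position in the cell

    r0, g0, b0 = lo[..., 0], lo[..., 1], lo[..., 2]
    r1, g1, b1 = hi[..., 0], hi[..., 1], hi[..., 2]
    fr, fg, fb = frac[..., 0:1], frac[..., 1:2], frac[..., 2:3]

    # Blend the 8 lattice points surrounding each pixel's color.
    return (lut[r0, g0, b0] * (1 - fr) * (1 - fg) * (1 - fb)
            + lut[r1, g0, b0] * fr * (1 - fg) * (1 - fb)
            + lut[r0, g1, b0] * (1 - fr) * fg * (1 - fb)
            + lut[r0, g0, b1] * (1 - fr) * (1 - fg) * fb
            + lut[r1, g1, b0] * fr * fg * (1 - fb)
            + lut[r1, g0, b1] * fr * (1 - fg) * fb
            + lut[r0, g1, b1] * (1 - fr) * fg * fb
            + lut[r1, g1, b1] * fr * fg * fb)

# Identity LUT: maps every color to itself (grading leaves frames unchanged).
n = 17
grid = np.linspace(0.0, 1.0, n)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
identity_lut = np.stack([r, g, b], axis=-1)

frame = np.random.rand(4, 4, 3)
graded = apply_3d_lut(frame, identity_lut)  # identity LUT: graded == frame
```

Because the same small LUT is reused for every pixel of every frame, the per-frame cost is a fixed interpolation pass, independent of how the LUT was generated.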