Recolour What Matters: Region-Aware Colour Editing via Token-Level Diffusion

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of precise continuous hue control in fine-grained local color editing with diffusion models, which is often hindered by the limited expressiveness of discrete text prompts. To overcome this, we propose ColourCrafter, a unified region-aware diffusion framework that enables semantically consistent local recoloring by fusing RGB color tokens with image tokens in the latent space. Key innovations include a token-level color-image fusion mechanism, a perceptually motivated loss function formulated in the Lab color space, mask-constrained optimization, and ColourfulSet—a large-scale dataset of continuous color-paired images. Extensive experiments demonstrate that ColourCrafter significantly outperforms existing text-driven and global colorization methods in terms of color accuracy, controllability, and perceptual fidelity, achieving state-of-the-art performance.
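The summary's "token-level color-image fusion mechanism" can be pictured roughly as embedding an RGB target as a single token and letting image tokens attend to it, gated by a region mask. The sketch below is a toy illustration under assumed names (`fuse_colour_token`, the stand-in embedding matrix `W_c`, the attention-style gating), not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_colour_token(image_tokens, rgb, W_c, region_mask):
    """Toy token-level colour-image fusion.

    image_tokens: (N, d) latent image tokens
    rgb:          (3,) target colour in [0, 1]
    W_c:          (3, d) colour-embedding matrix (stand-in for a learned layer)
    region_mask:  (N,) 1 for tokens in the editable region, 0 elsewhere
    """
    colour_token = rgb @ W_c                                  # (d,) colour token
    d = image_tokens.shape[-1]
    # affinity of each image token to the colour token (attention-style)
    attn = softmax(image_tokens @ colour_token / np.sqrt(d))  # (N,)
    # propagate colour only into masked, semantically relevant tokens
    gate = attn * region_mask
    return image_tokens + gate[:, None] * colour_token[None, :]
```

Tokens outside the mask pass through unchanged, which mirrors the summary's claim that colour information is propagated only to relevant regions while the rest of the image is preserved.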

📝 Abstract
Colour is one of the most perceptually salient yet least controllable attributes in image generation. Although recent diffusion models can modify object colours from user instructions, their results often deviate from the intended hue, especially for fine-grained and local edits. Early text-driven methods rely on discrete language descriptions that cannot accurately represent continuous chromatic variations. To overcome this limitation, we propose ColourCrafter, a unified diffusion framework that transforms colour editing from global tone transfer into a structured, region-aware generation process. Unlike traditional colour-driven methods, ColourCrafter performs token-level fusion of RGB colour tokens and image tokens in latent space, selectively propagating colour information to semantically relevant regions while preserving structural fidelity. A perceptual Lab-space loss further enhances pixel-level precision by decoupling luminance and chrominance and constraining edits within masked areas. Additionally, we build ColourfulSet, a large-scale dataset of high-quality image pairs with continuous and diverse colour variations. Extensive experiments demonstrate that ColourCrafter achieves state-of-the-art colour accuracy, controllability, and perceptual fidelity in fine-grained colour editing. Our project is available at https://yangyuqi317.github.io/ColourCrafter.github.io/.
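The abstract's perceptual Lab-space loss, which decouples luminance from chrominance and restricts edits to masked regions, can be sketched as below. The sRGB-to-Lab conversion uses the standard D65 constants, but the loss name (`masked_lab_loss`), the L1 form, and the luma/chroma weighting are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert sRGB in [0, 1], shape (..., 3), to CIE Lab (D65 white point)."""
    srgb = np.clip(rgb, 0.0, 1.0)
    # inverse sRGB gamma -> linear RGB
    linear = np.where(srgb <= 0.04045, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (D65)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ M.T
    white = np.array([0.95047, 1.0, 1.08883])
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16           # luminance channel
    a = 500 * (f[..., 0] - f[..., 1])  # chrominance
    b = 200 * (f[..., 1] - f[..., 2])  # chrominance
    return np.stack([L, a, b], axis=-1)

def masked_lab_loss(pred_rgb, target_rgb, mask, chroma_weight=1.0, luma_weight=0.1):
    """L1 loss in Lab space, computed only inside the binary edit mask.

    Luminance (L) and chrominance (a, b) are weighted separately, so the
    loss can emphasise hue fidelity while tolerating brightness changes.
    """
    diff = np.abs(rgb_to_lab(pred_rgb) - rgb_to_lab(target_rgb)) * mask[..., None]
    denom = mask.sum() + 1e-8
    luma = diff[..., 0].sum() / denom
    chroma = diff[..., 1:].sum() / (2 * denom)
    return luma_weight * luma + chroma_weight * chroma
```

Because the mask zeroes the per-pixel differences, pixels outside the edit region contribute nothing to the loss, matching the abstract's mask-constrained formulation.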
Problem

Research questions and friction points this paper is trying to address.

colour editing
region-aware
diffusion models
fine-grained control
continuous colour variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

token-level diffusion
region-aware colour editing
Lab-space loss
colour token fusion
ColourfulSet
Yuqi Yang
Nankai University
Computer Vision · Semantic Segmentation
Dongliang Chang
Beijing University of Posts and Telecommunications, Beijing 100876, China; Beijing Key Laboratory of Multimodal Data Intelligent Perception and Governance, Beijing 100876, China
Yijia Ling
Beijing University of Posts and Telecommunications, Beijing 100876, China; Beijing Key Laboratory of Multimodal Data Intelligent Perception and Governance, Beijing 100876, China
Ruoyi Du
Beijing University of Posts and Telecommunications, Beijing 100876, China
Zhanyu Ma
Beijing University of Posts and Telecommunications
Pattern Recognition · Machine Learning · Computer Vision · Multimedia Technology · Deep Learning