Controllable Image Colorization with Instance-aware Texts and Masks

📅 2025-05-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing image colorization methods suffer from color bleeding, erroneous color binding, and lack of instance-level control. This paper introduces the first fine-grained instance-level image colorization framework jointly guided by text descriptions and instance masks. Our approach is built upon a diffusion model architecture and features four key innovations: (1) a pixel-wise mask cross-attention mechanism enabling precise region–color alignment; (2) an instance-mask–text joint self-attention module enhancing semantic consistency; (3) independent sampling and fusion of multiple instances to preserve instance isolation; and (4) the construction of GPT-color—the first benchmark dataset for instance-level colorization. Extensive quantitative and qualitative evaluations demonstrate that our method achieves state-of-the-art performance in color accuracy, instance separation, and text–image alignment, significantly outperforming prior approaches across all metrics.

Technology Category

Application Category

📝 Abstract
Recently, the application of deep learning in image colorization has received widespread attention. The maturation of diffusion models has further advanced the development of image colorization models. However, current mainstream image colorization models still face issues such as color bleeding and color binding errors, and cannot colorize images at the instance level. In this paper, we propose a diffusion-based colorization method MT-Color to achieve precise instance-aware colorization with use-provided guidance. To tackle color bleeding issue, we design a pixel-level mask attention mechanism that integrates latent features and conditional gray image features through cross-attention. We use segmentation masks to construct cross-attention masks, preventing pixel information from exchanging between different instances. We also introduce an instance mask and text guidance module that extracts instance masks and text representations of each instance, which are then fused with latent features through self-attention, utilizing instance masks to form self-attention masks to prevent instance texts from guiding the colorization of other areas, thus mitigating color binding errors. Furthermore, we apply a multi-instance sampling strategy, which involves sampling each instance region separately and then fusing the results. Additionally, we have created a specialized dataset for instance-level colorization tasks, GPT-color, by leveraging large visual language models on existing image datasets. Qualitative and quantitative experiments show that our model and dataset outperform previous methods and datasets.
Problem

Research questions and friction points this paper is trying to address.

Prevent color bleeding in image colorization
Enable instance-level colorization with text guidance
Mitigate color binding errors using mask attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pixel-level mask attention prevents color bleeding
Instance mask and text guidance module
Multi-instance sampling strategy for fusion
🔎 Similar Papers
No similar papers found.
Y
Yanru An
Shanghai Jiao Tong University, China
L
Ling Gui
Shanghai Jiao Tong University, China
Q
Qiang Hu
Shanghai Jiao Tong University, China
Chunlei Cai
Chunlei Cai
Bilibili Inc.
Video compressionImage compressionImage processingDeep learning
Tianxiao Ye
Tianxiao Ye
Bilibili Inc.
X
Xiaoyun Zhang
Shanghai Jiao Tong University, China
Yanfeng Wang
Yanfeng Wang
Shanghai Jiao Tong University