🤖 AI Summary
Existing image colorization methods suffer from color bleeding, erroneous color binding, and a lack of instance-level control. This paper introduces the first fine-grained instance-level image colorization framework jointly guided by text descriptions and instance masks. The approach is built on a diffusion model architecture and features four key contributions: (1) a pixel-wise mask cross-attention mechanism enabling precise region–color alignment; (2) an instance-mask–text joint self-attention module enhancing semantic consistency; (3) independent sampling and fusion of multiple instances to preserve instance isolation; and (4) the construction of GPT-color, the first benchmark dataset for instance-level colorization. Extensive quantitative and qualitative evaluations demonstrate that the method achieves state-of-the-art performance in color accuracy, instance separation, and text–image alignment, significantly outperforming prior approaches across all metrics.
📝 Abstract
Recently, the application of deep learning to image colorization has received widespread attention, and the maturation of diffusion models has further advanced colorization research. However, current mainstream colorization models still suffer from color bleeding and color binding errors, and cannot colorize images at the instance level. In this paper, we propose MT-Color, a diffusion-based colorization method that achieves precise instance-aware colorization with user-provided guidance. To tackle the color bleeding issue, we design a pixel-level mask attention mechanism that integrates latent features and conditional grayscale image features through cross-attention, using segmentation masks to construct cross-attention masks that prevent pixel information from being exchanged between different instances. We also introduce an instance mask and text guidance module that extracts the instance mask and text representation of each instance and fuses them with latent features through self-attention; the instance masks form self-attention masks that prevent instance texts from guiding the colorization of other regions, thereby mitigating color binding errors. Furthermore, we apply a multi-instance sampling strategy that samples each instance region separately and then fuses the results. Additionally, we have created GPT-color, a dataset specialized for instance-level colorization, by applying large vision-language models to existing image datasets. Qualitative and quantitative experiments show that our model and dataset outperform previous methods and datasets.
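The core idea behind both attention modules above is the same: use instance segmentation to build a boolean attention mask so that pixels (or text tokens) of one instance cannot influence another. The following is a minimal numpy sketch of that masking pattern, not the paper's implementation; the function name, shapes, and toy data are all hypothetical illustrations.

```python
import numpy as np

def masked_attention(q, k, v, attn_mask):
    """Scaled dot-product attention with a boolean mask.

    q: (Nq, d) queries; k, v: (Nk, d) keys/values.
    attn_mask: (Nq, Nk) boolean, True where attention is allowed.
    Disallowed positions get a large negative score, so their
    softmax weight underflows to exactly zero.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(attn_mask, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy setup: 4 pixels belonging to 2 instances (labels per pixel).
labels = np.array([0, 0, 1, 1])
# Instance-aware mask: pixel i may attend to pixel j only if they
# share an instance label, blocking cross-instance information flow.
pixel_mask = labels[:, None] == labels[None, :]

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
out = masked_attention(q, k, v, pixel_mask)
```

The same construction extends to the text-guidance case: there, rows index pixels and columns index per-instance text tokens, and the mask allows a pixel to attend only to the tokens describing its own instance. A quick way to sanity-check the isolation property is to perturb the values of one instance and confirm the other instance's output is unchanged.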