🤖 AI Summary
This work addresses three key challenges in language-guided image colorization: strong color ambiguity, poor user controllability, and the absence of a systematic evaluation benchmark. To this end, we introduce the first comprehensive benchmark and standardized evaluation framework specifically designed for this task. Methodologically, we propose a lightweight distilled diffusion architecture that integrates CLIP’s pre-trained multimodal alignment capabilities with a cross-modal feature fusion mechanism, enabling high-fidelity chromatic reconstruction and fine-grained semantic control under text guidance. Experimental results demonstrate that our approach surpasses existing complex models in colorization quality while achieving a 14× speedup in inference time. We publicly release all code, datasets, and evaluation tools, filling a critical gap in the field and providing a reproducible, extensible foundation for future research.
📝 Abstract
Image colorization aims to bring colors back to grayscale images. Automatic colorization methods, which require no additional guidance, struggle to generate high-quality images due to color ambiguity and provide limited user controllability. Thanks to the emergence of cross-modality datasets and models, language-based colorization methods have been proposed to fully exploit the efficiency and flexibility of text descriptions for guiding colorization. In view of the lack of a comprehensive review of the language-based colorization literature, we conduct a thorough analysis and benchmarking. We first briefly summarize existing automatic colorization methods. Then, we focus on language-based methods and point out their core challenge: cross-modal alignment. We further divide these methods into two categories: one attempts to train a cross-modality network from scratch, while the other leverages a pre-trained cross-modality model to establish textual-visual correspondence. Based on the analyzed limitations of existing language-based methods, we propose a simple yet effective method based on a distilled diffusion model. Extensive experiments demonstrate that our simple baseline produces better results than previous complex methods with a 14× speedup. To the best of our knowledge, this is the first comprehensive review and benchmark of the language-based image colorization field, providing meaningful insights for the community. The code is available at https://github.com/lyf1212/Color-Turbo.
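The cross-modal alignment at the heart of language-based colorization is typically realized by letting spatial image features attend to text-token embeddings. The sketch below is a minimal, framework-free illustration of such a cross-attention fusion step; it is a hypothetical toy (random features, single attention head, no learned projections), not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_fusion(img_feats, text_feats):
    """Fuse grayscale-image features (queries) with text-token
    embeddings (keys/values) via one cross-attention step.

    img_feats:  (N, d) flattened spatial features of the gray image
    text_feats: (L, d) token embeddings of the caption (e.g. from CLIP)
    returns:    (N, d) text-conditioned features (residual fusion)
    """
    d = img_feats.shape[-1]
    scores = img_feats @ text_feats.T / np.sqrt(d)   # (N, L) similarities
    attn = softmax(scores, axis=-1)                  # each row sums to 1
    return img_feats + attn @ text_feats             # residual update

# toy example: 16 spatial tokens, 4 caption tokens, 8-dim features
rng = np.random.default_rng(0)
img = rng.standard_normal((16, 8))
txt = rng.standard_normal((4, 8))
fused = cross_modal_fusion(img, txt)
print(fused.shape)  # (16, 8)
```

In a real model the queries, keys, and values would pass through learned linear projections and the fused features would condition a decoder (here, a denoising U-Net) that predicts the chrominance channels.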