🤖 AI Summary
Current text-to-image diffusion models struggle to achieve pixel-level RGB color fidelity. Dominant approaches, such as ColorPeel, rely on model fine-tuning, which compromises flexibility and generalizability. This work identifies, for the first time, an implicit binding between color-descriptive text tokens and reference image features within the cross-attention layers of IP-Adapter. Building on this observation, the authors propose a training-free "rewiring" mechanism that remaps these feature bindings and decouples semantic attributes, enabling zero-shot, precise injection of arbitrary text-specified colors. The method requires no fine-tuning, auxiliary networks, or per-concept optimization, and preserves both generation quality and diversity. Extensive evaluation across diverse object categories shows substantial gains in color accuracy and consistency, outperforming ColorPeel and other baselines on all reported metrics.
📝 Abstract
Recent advances in text-to-image (T2I) diffusion models have enabled remarkable control over various attributes, yet precise color specification remains a fundamental challenge. Existing approaches, such as ColorPeel, rely on model personalization, requiring additional optimization and limiting flexibility in specifying arbitrary colors. In this work, we introduce ColorWave, a novel training-free approach that achieves exact RGB-level color control in diffusion models without fine-tuning. By systematically analyzing the cross-attention mechanisms within IP-Adapter, we uncover an implicit binding between textual color descriptors and reference image features. Leveraging this insight, our method rewires these bindings to enforce precise color attribution while preserving the generative capabilities of pretrained models. Our approach maintains generation quality and diversity, outperforming prior methods in accuracy and applicability across diverse object categories. Through extensive evaluations, we demonstrate that ColorWave establishes a new paradigm for structured, color-consistent diffusion-based image synthesis.
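The paper itself does not publish the rewiring equations here, but the core idea (remapping what a color token's cross-attention slot resolves to) can be sketched in miniature. The toy below is an assumption-laden illustration, not ColorWave's actual implementation: `color_embedding`, the single fused K/V set, and all dimensions are hypothetical stand-ins. In a real IP-Adapter, image features live in a separate decoupled attention branch; here we simply overwrite the value vector bound to the color token with a vector derived from the target RGB, so that spatial queries attending to that token receive the injected color feature.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d = 8                                  # toy feature dimension
rng = np.random.default_rng(0)

Q = rng.normal(size=(16, d))           # 16 spatial latent positions (queries)
K_txt = rng.normal(size=(4, d))        # 4 text tokens; index 2 plays the color word
V_txt = rng.normal(size=(4, d))

def color_embedding(rgb):
    # Hypothetical stand-in for the reference-image feature that would
    # normally carry the color; the real method derives this from
    # IP-Adapter's image branch, not from raw RGB values.
    v = np.zeros(d)
    v[:3] = np.asarray(rgb, dtype=float) / 255.0
    return v

def rewired_cross_attention(Q, K, V, color_token_idx, rgb):
    # "Rewiring": remap the binding of the color token so its value
    # vector is the injected color feature instead of the original one.
    V = V.copy()
    V[color_token_idx] = color_embedding(rgb)
    A = softmax(Q @ K.T / np.sqrt(d))  # attention weights are left untouched
    return A @ V

out = rewired_cross_attention(Q, K_txt, V_txt, color_token_idx=2, rgb=(210, 45, 45))
```

Note the design choice this toy mirrors: only the value pathway is edited, so the attention distribution (and hence layout and semantics) is unchanged, which is consistent with the paper's claim that generation quality and diversity are preserved.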