GenColor: Generative Color-Concept Association in Visual Design

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing color-concept association methods rely on unstable retrieved image references, limiting contextual adaptability (e.g., "clear" vs. "polluted" skies) and failing to distinguish primary and accent color roles. This paper introduces a generative color-concept association framework that replaces retrieved references with images synthesized by text-to-image diffusion models, using them to uncover semantically resonant colors for a given concept. A three-stage pipeline—concept instantiation, CLIP-guided text-driven image segmentation, and region-aware weighted color extraction—enables context-sensitive, designer-level palette generation. In quantitative comparisons the method performs on par with professional designers, and its applicability is demonstrated across UI, branding, and environmental design scenarios, along with an interactive color gallery.

📝 Abstract
Existing approaches for color-concept association typically rely on query-based image retrieval and color extraction from the retrieved references. However, these approaches are effective only for common concepts and are vulnerable to unstable image referencing and varying image conditions. Our formative study with designers underscores the need for primary-accent color compositions and context-dependent colors (e.g., 'clear' vs. 'polluted' sky) in design. In response, we introduce a generative approach that mines semantically resonant colors from images generated by text-to-image models. Our insight is that contemporary text-to-image models can reproduce visual patterns learned from large-scale real-world data. The framework comprises three stages: concept instancing produces generative samples using diffusion models, text-guided image segmentation identifies concept-relevant regions within each image, and color association extracts primary colors accompanied by accent colors. Quantitative comparisons with expert designs validate the approach's effectiveness, and we demonstrate its applicability through cases in various design scenarios and a gallery.
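The abstract's third stage (extracting a primary color accompanied by accent colors from a concept-relevant region) can be sketched as below. This is an illustrative placeholder, not the paper's method: `extract_palette` is a hypothetical helper that clusters masked pixels with plain k-means and ranks clusters by size, so the largest cluster acts as the primary color and the rest as accents.

```python
import numpy as np

def extract_palette(image, mask, n_colors=4, seed=0):
    """Cluster pixels inside a concept mask and rank the cluster centers
    by pixel count: one primary color followed by accent colors.

    image: (H, W, 3) uint8 array; mask: (H, W) bool array marking the
    concept-relevant region (e.g., from text-guided segmentation).
    """
    pixels = image[mask].astype(np.float64)           # (N, 3) region pixels
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    for _ in range(20):                               # plain Lloyd's k-means
        dists = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_colors):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    counts = np.bincount(labels, minlength=n_colors)
    order = counts.argsort()[::-1]                    # largest cluster first
    return centers[order].astype(np.uint8)            # [primary, accents...]
```

The paper's actual extraction is region-aware and weighted across generated samples; this sketch only shows the core idea of deriving a primary/accent split from cluster dominance within a single masked image.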
Problem

Research questions and friction points this paper is trying to address.

Existing methods fail for uncommon concepts and are vulnerable to unstable image references.
Designers need context-dependent colors and primary-accent color compositions.
The proposed generative approach mines semantically resonant colors using text-to-image models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative color mining using text-to-image models
Text-guided image segmentation for concept relevance
Primary-accent color extraction from generative samples
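The second innovation point, selecting concept-relevant regions, reduces to ranking candidate regions by similarity to the concept text. A minimal sketch follows; `pick_concept_region` and the toy vectors are assumptions for illustration, whereas a real pipeline would obtain the embeddings from a CLIP image encoder (one per segmented region) and text encoder (for the concept prompt).

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_concept_region(region_embeddings, text_embedding):
    """Score each candidate region embedding against the concept's text
    embedding and return (best_region_index, all_scores)."""
    scores = [cosine(r, text_embedding) for r in region_embeddings]
    return int(np.argmax(scores)), scores
```

The design choice here mirrors CLIP-guided segmentation in spirit: both image regions and the concept prompt live in a shared embedding space, so region relevance becomes a nearest-neighbor query under cosine similarity.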