DesignWeaver: Dimensional Scaffolding for Text-to-Image Product Design

📅 2025-02-14
🤖 AI Summary
Novice designers, lacking domain expertise, struggle to formulate effective prompts for exploring generative AI’s product design space, resulting in limited output diversity and innovation. To address this, we propose a prompt-guidance paradigm that reverse-engineers design dimensions from generated images: by clustering and analyzing visual features of outputs from models such as Stable Diffusion, our method identifies salient design dimensions and constructs an interactive visualization panel to support high-quality prompt construction. This translates expert practices—where visual references scaffold collaborative design—into an operational interface scaffold, bridging the gap between design cognition and generative AI utilization. A user study with 52 participants demonstrates improvements in prompt length, domain-specific vocabulary usage, and design diversity and novelty. Furthermore, the study uncovers critical mismatches between current model capabilities and designer expectations, providing empirical grounding and actionable directions for the development of AI-augmented design tools.
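The summary above describes clustering visual features of generated images to surface salient design dimensions. The page does not specify the paper's actual algorithm, so purely as an illustration of the idea — using a toy k-means over hypothetical image feature vectors, with made-up axes and data — the clustering step might be sketched as:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over feature vectors (plain Python, no dependencies)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster (keep old if empty).
        centers = [[sum(dim) / len(c) for dim in zip(*c)] if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Toy "image embeddings": two visual styles along two hypothetical
# feature axes (e.g. organic-vs-geometric, matte-vs-glossy).
features = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)]
centers, clusters = kmeans(features, k=2)
# Each cluster groups stylistically similar outputs; inspecting a cluster's
# center is one way to label a candidate design dimension.
```

In a real pipeline the feature vectors would come from a vision model's embeddings rather than hand-picked coordinates, and the clusters would be inspected and named before being surfaced in the interface.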

📝 Abstract
Generative AI has enabled novice designers to quickly create professional-looking visual representations for product concepts. However, novices have limited domain knowledge that could constrain their ability to write prompts that effectively explore a product design space. To understand how experts explore and communicate about design spaces, we conducted a formative study with 12 experienced product designers and found that experts -- and their less-versed clients -- often use visual references to guide co-design discussions rather than written descriptions. These insights inspired DesignWeaver, an interface that helps novices generate prompts for a text-to-image model by surfacing key product design dimensions from generated images into a palette for quick selection. In a study with 52 novices, DesignWeaver enabled participants to craft longer prompts with more domain-specific vocabularies, resulting in more diverse, innovative product designs. However, the nuanced prompts heightened participants' expectations beyond what current text-to-image models could deliver. We discuss implications for AI-based product design support tools.
Problem

Research questions and friction points this paper is trying to address.

Novices lack the domain knowledge to write prompts that effectively explore a product design space
Text-only prompting limits the diversity and novelty of generated designs
Current text-to-image models fall short of the expectations set by nuanced prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual-reference-guided prompt generation
Interface that surfaces key design dimensions into a selectable palette
Longer prompts with richer domain-specific vocabulary
Sirui Tao
UC San Diego, La Jolla, CA, USA
Ivan Liang
UC San Diego, La Jolla, CA, USA
Cindy Peng
Carnegie Mellon University, Pittsburgh, PA, USA
Zhiqing Wang
UC San Diego, La Jolla, CA, USA
Srishti Palani
Senior Researcher, Tableau Research
Human-Computer Interaction, Human-Centered AI, Recommendation Systems, Cognitive Science
Steven Dow
Professor, Dept of Cognitive Science, Design Lab, UC San Diego
Human-computer interaction, design, social computing, collective intelligence