GenTune: Toward Traceable Prompts to Improve Controllability of Image Refinement in Environment Design

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In environment design, generative AI–assisted image creation faces dual challenges: poor local detail control and weak global consistency. Long LLM-expanded prompts hinder precise identification of key visual terms, while localized inpainting often disrupts semantic coherence. This paper proposes GenTune, an interactive image-refinement approach centered on a traceable bidirectional mapping between prompt labels and image regions: designers can select any image element, trace it back to the corresponding prompt labels, and revise those labels, enabling fine-grained yet globally consistent iterative refinement. The approach integrates LLM-based prompt expansion, text-to-image generation, and controllable localized inpainting. A summative study with 20 designers showed statistically significant improvements in prompt–image comprehension, refinement quality, efficiency, and overall satisfaction (all p < .01) over current practice, and a follow-up field study with two professional studios demonstrated its effectiveness in real-world settings.

📝 Abstract
Environment designers in the entertainment industry create imaginative 2D and 3D scenes for games, films, and television, requiring both fine-grained control of specific details and consistent global coherence. Designers have increasingly integrated generative AI into their workflows, often relying on large language models (LLMs) to expand user prompts for text-to-image generation, then iteratively refining those prompts and applying inpainting. However, our formative study with 10 designers surfaced two key challenges: (1) the lengthy LLM-generated prompts make it difficult to understand and isolate the keywords that must be revised for specific visual elements; and (2) while inpainting supports localized edits, it can struggle with global consistency and correctness. Based on these insights, we present GenTune, an approach that enhances human--AI collaboration by clarifying how AI-generated prompts map to image content. Our GenTune system lets designers select any element in a generated image, trace it back to the corresponding prompt labels, and revise those labels to guide precise yet globally consistent image refinement. In a summative study with 20 designers, GenTune significantly improved prompt--image comprehension, refinement quality, efficiency, and overall satisfaction (all $p < .01$) compared to current practice. A follow-up field study with two studios further demonstrated its effectiveness in real-world settings.
Problem

Research questions and friction points this paper is trying to address.

Improving traceability of AI-generated prompts for image refinement
Enhancing controllability over specific visual elements in environment design
Maintaining global consistency during localized image editing processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Traceable prompts linking image elements to labels
Selectable image elements for targeted prompt revision
AI-human collaboration for precise global refinement
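The traceable-prompt idea described above can be sketched as a bidirectional index between prompt labels and image regions: clicking a region reverse-locates its label, and revising the label propagates to every region it covers. This is a hypothetical illustration under assumed names (`PromptLabel`, `TraceableMapping` and their methods are invented here), not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PromptLabel:
    token: str                      # prompt keyword, e.g. "ancient stone bridge"
    span: tuple                     # (start, end) character offsets in the full prompt
    region_ids: list = field(default_factory=list)  # image regions linked to this label

class TraceableMapping:
    """Bidirectional index between prompt labels and image regions."""

    def __init__(self):
        self.labels = {}            # label_id -> PromptLabel
        self.region_to_label = {}   # region_id -> label_id (reverse lookup)

    def link(self, label_id, label, region_id):
        """Associate an image region with a prompt label, in both directions."""
        self.labels[label_id] = label
        label.region_ids.append(region_id)
        self.region_to_label[region_id] = label_id

    def trace(self, region_id):
        """Clicked region -> its prompt label, or None if unmapped."""
        return self.labels.get(self.region_to_label.get(region_id))

    def revise(self, region_id, new_token):
        """Edit the label text; all regions linked to it share the update."""
        label = self.trace(region_id)
        if label is not None:
            label.token = new_token
        return label
```

In this sketch, a revision made through any region updates the shared label object, so a subsequent regeneration or inpainting pass would see one consistent prompt, which is the property the paper's global-consistency goal relies on.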
Wen-Fan Wang
National Taiwan University, Taipei, Taiwan
Ting-Ying Lee
National Taiwan University, Taipei, Taiwan
Chien-Ting Lu
National Taiwan University, Taipei, Taiwan
Che-Wei Hsu
National Taiwan University, Taipei, Taiwan
Nil Ponsa Campany
National Taiwan University, Taipei, Taiwan
Yu Chen
National Taiwan University, Taipei, Taiwan
Mike Y. Chen
National Taiwan University, Taipei, Taiwan
Bing-Yu Chen
National Taiwan University
Computer Graphics · Human-Computer Interaction