AI Summary
To address ambiguous user intent and misaligned feedback in text-to-image generation, this paper proposes a two-stage interactive framework: an initial image generation stage followed by an iterative refinement stage comprising Dialogue-to-Prompt (D2P) conversion, Feedback-Reflection (FR), and Adaptive Optimization (AO) of generation parameters. We introduce the first dialogue-driven co-optimization mechanism that dynamically models user intent and enables real-time adjustment of the generation process, preserving prompt fidelity while substantially improving personalized alignment. Experiments demonstrate strong semantic alignment, with CLIP and BLIP scores of 0.338 and 0.336, respectively; a human preference win rate of 33.6%, a 27.4-percentage-point improvement over a GPT-4-enhanced baseline; 88% user satisfaction after eight iterations; and a 40% reduction in required iterations for fashion design tasks.
Abstract
Although text-to-image generation technologies have made significant advances, they still struggle with ambiguous prompts and with aligning outputs to user intent. Our proposed framework, TDRI (Two-Phase Dialogue Refinement and Co-Adaptation), addresses these issues by refining image generation through iterative user interaction. It consists of two phases: the Initial Generation Phase, which creates base images from user prompts, and the Interactive Refinement Phase, which integrates user feedback through three key modules. The Dialogue-to-Prompt (D2P) module transforms user feedback into actionable prompts, improving the alignment between user intent and model input. The Feedback-Reflection (FR) module evaluates generated outputs against user expectations, identifies discrepancies, and guides improvements. To ensure consistently high-quality results, the Adaptive Optimization (AO) module fine-tunes the generation process, balancing user preferences with prompt fidelity. Experimental results show that TDRI outperforms existing methods, achieving a 33.6% human preference rate, compared with 6.2% for GPT-4 augmentation, and the highest CLIP and BLIP alignment scores (0.338 and 0.336, respectively). In iterative feedback tasks, user satisfaction rose to 88% after 8 rounds, with diminishing returns beyond 6 rounds. TDRI also reduces the number of iterations needed and improves personalization in fashion product design. With its ability to streamline the creative process and improve alignment with user preferences, TDRI shows strong potential for a wide range of creative and industrial applications.
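To make the two-phase workflow concrete, the following is a minimal Python sketch of the interaction loop as described above. The function names, signatures, data structures, and parameters (generate_image, d2p, feedback_reflection, adaptive_optimization, the guidance and refinement_steps fields) are illustrative assumptions for exposition, not the paper's implementation.

```python
# Minimal sketch of a TDRI-style two-phase loop (illustrative only).
# All function names, signatures, and parameters below are assumptions,
# not taken from the paper.

from dataclasses import dataclass, field


@dataclass
class GenerationState:
    prompt: str                                  # current actionable prompt for the generator
    params: dict = field(default_factory=dict)   # tunable generation parameters
    image: object = None                         # latest generated image


def generate_image(prompt: str, params: dict) -> object:
    """Placeholder for the underlying text-to-image model call."""
    return f"<image for '{prompt}' with {params}>"


def d2p(dialogue_turn: str, state: GenerationState) -> str:
    """Dialogue-to-Prompt: fold user feedback into a refined prompt (stub)."""
    return f"{state.prompt}, {dialogue_turn}"


def feedback_reflection(image: object, dialogue_turn: str) -> list:
    """Feedback-Reflection: note discrepancies between output and intent (stub)."""
    return [f"mismatch noted against feedback: {dialogue_turn}"]


def adaptive_optimization(params: dict, discrepancies: list) -> dict:
    """Adaptive Optimization: adjust generation parameters from discrepancies (stub)."""
    return {**params, "refinement_steps": params.get("refinement_steps", 0) + 1}


def tdri_loop(initial_prompt: str, dialogue_turns: list) -> GenerationState:
    # Phase 1: initial generation from the raw user prompt.
    state = GenerationState(prompt=initial_prompt, params={"guidance": 7.5})
    state.image = generate_image(state.prompt, state.params)

    # Phase 2: iterative refinement driven by user dialogue.
    for turn in dialogue_turns:
        state.prompt = d2p(turn, state)                                     # D2P
        discrepancies = feedback_reflection(state.image, turn)              # FR
        state.params = adaptive_optimization(state.params, discrepancies)   # AO
        state.image = generate_image(state.prompt, state.params)
    return state


if __name__ == "__main__":
    final = tdri_loop("a red evening dress",
                      ["make the fabric silk", "add a floral pattern"])
    print(final.prompt, final.params)
```

In this reading, each round of user feedback updates both the prompt (via D2P) and the generation parameters (via FR and AO) before the image is regenerated, which is one way to realize the co-adaptation the paper describes.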