🤖 AI Summary
Text-to-image (T2I) diffusion models often suffer from poor prompt–image semantic alignment and generate distorted or unrealistic images.
Method: This paper proposes FRAP, a lightweight approach that adaptively adjusts per-token prompt weights instead of optimizing the latent code. It introduces (1) an online algorithm that updates each token's weight coefficient during generation; (2) a unified objective function that jointly encourages object presence and the binding of object-modifier pairs; and (3) a combination with prompt-rewriting LLMs to recover the prompt-image alignment that rewriting alone can degrade.
Results: On the COCO-Subject dataset, the method significantly improves prompt-image alignment on complex prompts while running about 4 seconds faster on average than the latent-code optimization method D&B, and it improves both CLIP-IQA-Real realism scores and visual quality in qualitative comparisons. The authors also show that combining FRAP with prompt-rewriting LLMs improves both alignment and image quality without modifying the latent code.
📝 Abstract
Text-to-image (T2I) diffusion models have demonstrated impressive capabilities in generating high-quality images given a text prompt. However, ensuring prompt-image alignment, i.e., generating images that faithfully align with the prompt's semantics, remains a considerable challenge. Recent works attempt to improve faithfulness by optimizing the latent code, which can push the latent code out of distribution and thus produce unrealistic images. In this paper, we propose FRAP, a simple yet effective approach based on adaptively adjusting the per-token prompt weights to improve prompt-image alignment and the authenticity of the generated images. We design an online algorithm to adaptively update each token's weight coefficient, which is achieved by minimizing a unified objective function that encourages object presence and the binding of object-modifier pairs. Through extensive evaluations, we show FRAP generates images with significantly higher prompt-image alignment for prompts from complex datasets, while having a lower average latency than recent latent code optimization methods, e.g., 4 seconds faster than D&B on the COCO-Subject dataset. Furthermore, through visual comparisons and evaluation on the CLIP-IQA-Real metric, we show that FRAP not only improves prompt-image alignment but also generates more authentic images with realistic appearances. We also explore combining FRAP with prompt-rewriting LLMs to recover their degraded prompt-image alignment, where we observe improvements in both prompt-image alignment and image quality. We release the code at the following link: https://github.com/LiyaoJiang1998/FRAP/.
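The core idea of the abstract — an online update of per-token weights that minimizes a unified objective combining object presence and object-modifier binding — can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the function names, the finite-difference gradient, and the weight bounds are assumptions, and the actual method works on the diffusion model's cross-attention maps with backpropagation rather than on a static attention matrix.

```python
import numpy as np


def unified_objective(attn, object_idx, pairs):
    """Toy stand-in for the unified objective described in the abstract.

    attn: (num_pixels, num_tokens) cross-attention-like matrix (assumed shape).
    object_idx: indices of object tokens in the prompt.
    pairs: (object_token, modifier_token) index pairs to bind together.
    """
    # Presence term: each object token should have a strong attention peak.
    presence = sum(1.0 - attn[:, i].max() for i in object_idx)
    # Binding term: an object's attention map should overlap its modifier's.
    binding = sum(np.abs(attn[:, o] - attn[:, m]).sum() for o, m in pairs)
    return presence + binding


def update_weights(weights, attn, object_idx, pairs, lr=0.1, eps=1e-3):
    """One online update step of the per-token weights (illustrative).

    Uses a finite-difference gradient for self-containment; the paper instead
    differentiates through the model's cross-attention.
    """
    base = unified_objective(attn * weights, object_idx, pairs)
    grad = np.zeros_like(weights)
    for t in range(len(weights)):
        w = weights.copy()
        w[t] += eps
        grad[t] = (unified_objective(attn * w, object_idx, pairs) - base) / eps
    new_weights = weights - lr * grad
    # Clip to a stable range so no token is muted or over-amplified
    # (the bounds here are an assumption, not taken from the paper).
    return np.clip(new_weights, 0.5, 2.0)
```

In a real pipeline, a step like this would run at selected denoising timesteps, so the token weights adapt as generation proceeds rather than being fixed up front.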