Reward-Agnostic Prompt Optimization for Text-to-Image Diffusion Models

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of existing prompt optimization methods for text-to-image (T2I) diffusion models: their dependence on task-specific reward models and resulting poor generalizability. The authors propose RATTPO, a reward-agnostic test-time prompt optimization framework built on large language models (LLMs). Without altering model parameters, RATTPO iteratively refines prompts by querying an LLM, using the optimization trajectory and a lightweight reward-aware "hint" signal as context, which eliminates reliance on handcrafted reward-specific task descriptions. Its core contribution is the first reward-agnostic test-time prompt optimization paradigm. Experiments demonstrate that RATTPO consistently improves generation quality across heterogeneous reward criteria, including aesthetic scoring, human preference, and spatial-relationship fidelity, while using up to 3.5× less inference budget than test-time search baselines and, given sufficient budget, matching the performance of fine-tuning-based approaches.

📝 Abstract
We investigate a general approach for improving user prompts in text-to-image (T2I) diffusion models by finding prompts that maximize a reward function specified at test-time. Although diverse reward models are used for evaluating image generation, existing automated prompt engineering methods typically target specific reward configurations. Consequently, these specialized designs exhibit suboptimal performance when applied to new prompt engineering scenarios involving different reward models. To address this limitation, we introduce RATTPO (Reward-Agnostic Test-Time Prompt Optimization), a flexible test-time optimization method applicable across various reward scenarios without modification. RATTPO iteratively searches for optimized prompts by querying large language models (LLMs) *without* requiring reward-specific task descriptions. Instead, it uses the optimization trajectory and a novel reward-aware feedback signal (termed a "hint") as context. Empirical results demonstrate the versatility of RATTPO, effectively enhancing user prompts across diverse reward setups that assess various generation aspects, such as aesthetics, general human preference, or spatial relationships between objects. RATTPO surpasses other test-time search baselines in search efficiency, using up to 3.5 times less inference budget, and, given sufficient inference budget, achieves performance comparable to learning-based baselines that require reward-specific fine-tuning. The code is available at https://github.com/seminkim/RATTPO.
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts for diverse reward models in text-to-image generation
Improving prompt engineering without reward-specific task descriptions
Enhancing search efficiency and versatility in test-time prompt optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reward-agnostic prompt optimization for T2I models
Uses LLMs without reward-specific task descriptions
Employs reward-aware feedback signal for efficiency
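The iterative search described above can be sketched as a simple loop. This is a hypothetical illustration, not the paper's implementation (see the linked repository): `llm_propose`, `generate_image`, and `reward` are assumed stand-ins for an LLM call, the T2I diffusion model, and an arbitrary test-time reward model.

```python
# Hypothetical sketch of a reward-agnostic test-time prompt search.
# No reward-specific task description is used: the LLM only sees the
# optimization trajectory and an optional reward-aware "hint" string.

def optimize_prompt(user_prompt, llm_propose, generate_image, reward,
                    budget=20, hint=""):
    # Score the original user prompt first.
    best_prompt = user_prompt
    best_score = reward(generate_image(user_prompt))
    trajectory = [(best_prompt, best_score)]  # (prompt, score) pairs

    for _ in range(budget):
        # Ask the LLM for a refined prompt, conditioned on past attempts
        # and their scores plus the hint signal.
        candidate = llm_propose(trajectory, hint)
        score = reward(generate_image(candidate))
        trajectory.append((candidate, score))
        if score > best_score:
            best_prompt, best_score = candidate, score

    return best_prompt, best_score
```

Because the loop only reads scalar scores, the same code works unchanged whether `reward` measures aesthetics, human preference, or spatial relationships.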
Semin Kim (KAIST)
Yeonwoo Cha (KAIST)
Jaehoon Yoo (KAIST CS)
Seunghoon Hong (KAIST)