DreamDistribution: Learning Prompt Distribution for Diverse In-distribution Generation

📅 2023-12-21
📈 Citations: 7
Influential: 1
🤖 AI Summary
This work addresses personalized generation in text-to-image (T2I) diffusion models by learning a *sampleable soft prompt distribution*, rather than a single fixed prompt, to jointly achieve concept-level personalization and image diversity. Building on pretrained T2I models, the method optimizes the prompt embedding distribution with variational inference and distribution alignment, incorporating CLIP feature constraints and reparameterized sampling. This enables cross-distribution prompt mixing, text-guided editing, and transfer to downstream multimodal tasks such as text-to-3D. Extensive evaluation shows significant improvements over baselines, including Prompt Tuning and DreamBooth, on FID, LPIPS, diversity metrics, and human evaluation. The approach simultaneously improves generation fidelity, semantic consistency with the input prompt, and cross-modal generalization.
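The summary's core mechanism, sampling soft prompts from a learned distribution via the reparameterization trick, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the shapes (`K` learnable prompts of `L` tokens with embedding size `D`) and the Gaussian fit over the prompt set are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: K learnable soft prompts, each L tokens long
# with embedding dimension D (values are illustrative, not from the paper).
K, L, D = 8, 4, 16

# A set of learnable soft prompt embeddings; a Gaussian is fit over them
# so that new prompts can be sampled rather than reused verbatim.
prompts = rng.normal(size=(K, L, D))
mu = prompts.mean(axis=0)     # per-token mean embedding
sigma = prompts.std(axis=0)   # per-token standard deviation

def sample_prompt(rng):
    """Reparameterized sample: p = mu + sigma * eps, with eps ~ N(0, I).

    Writing the sample this way keeps it differentiable w.r.t. mu and
    sigma, so a diffusion loss on the generated image can update the
    distribution parameters through the sampled prompt.
    """
    eps = rng.normal(size=(L, D))
    return mu + sigma * eps

# Each draw is a distinct soft prompt; conditioning the frozen T2I model
# on different draws yields varied but in-distribution generations.
p1 = sample_prompt(rng)
p2 = sample_prompt(rng)
```

Sampling rather than reusing a single tuned prompt is what separates this from standard prompt tuning: the variance term is the source of the generation diversity the summary highlights.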
📝 Abstract
The popularization of Text-to-Image (T2I) diffusion models enables the generation of high-quality images from text descriptions. However, generating diverse customized images with reference visual attributes remains challenging. This work focuses on personalizing T2I diffusion models at a more abstract concept or category level, adapting commonalities from a set of reference images while creating new instances with sufficient variations. We introduce a solution that allows a pretrained T2I diffusion model to learn a set of soft prompts, enabling the generation of novel images by sampling prompts from the learned distribution. These prompts offer text-guided editing capabilities and additional flexibility in controlling variation and mixing between multiple distributions. We also show the adaptability of the learned prompt distribution to other tasks, such as text-to-3D. Finally, we demonstrate the effectiveness of our approach through quantitative analysis, including automatic evaluation and human assessment. Project website: https://briannlongzhao.github.io/DreamDistribution
Problem

Research questions and friction points this paper is trying to address.

Generating diverse customized images with reference attributes
Personalizing T2I models at abstract concept level
Learning prompt distribution for varied in-distribution generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning soft prompts for diverse image generation
Sampling prompts from learned distribution for variations
Adapting prompt distribution to text-to-3D tasks
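The second innovation, mixing between learned distributions, can be sketched by blending the parameters of two Gaussian prompt distributions before sampling. This is a hedged sketch under assumed names and shapes (`mu_a`, `sigma_a`, etc. are illustrative); the paper's actual mixing scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
L, D = 4, 16  # illustrative token count and embedding dimension

# Two hypothetical learned prompt distributions (e.g., two personalized
# concepts), each represented by a per-token mean and standard deviation.
mu_a, sigma_a = rng.normal(size=(L, D)), np.full((L, D), 0.1)
mu_b, sigma_b = rng.normal(size=(L, D)), np.full((L, D), 0.1)

def mix_and_sample(w, rng):
    """Sample a soft prompt from a blend of two Gaussian distributions.

    A simple scheme: interpolate the parameters with weight w in [0, 1],
    then draw a reparameterized sample from the blended Gaussian. w=1
    recovers distribution A, w=0 recovers distribution B.
    """
    mu = w * mu_a + (1 - w) * mu_b
    sigma = w * sigma_a + (1 - w) * sigma_b
    return mu + sigma * rng.normal(size=(L, D))

# A 50/50 blend conditions the T2I model on a prompt "between" concepts.
blended = mix_and_sample(0.5, rng)
```

Sweeping `w` gives a continuous control over how much each concept contributes, which is one plausible way the controllable "mixing between multiple distributions" mentioned in the abstract could be exposed.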