Improving Compositional Generation with Diffusion Models Using Lift Scores

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low conditional alignment rate in diffusion models for multi-condition compositional generation, this paper proposes a training-free, lift-score-driven resampling method. It approximates satisfaction scores for individual conditions and combines them via lift score to infer overall prompt consistency, enabling efficient and interpretable compositional reasoning. This work is the first to introduce lift score into diffusion-based compositional generation—requiring no additional parameters, auxiliary modules, or fine-tuning. Evaluated on 2D synthetic data, CLEVR spatial reasoning, and text-to-image generation, the method achieves significant improvements in conditional alignment (average +12.7%) while incurring minimal inference overhead (<5% additional sampling steps). The core contribution is a lightweight, general-purpose, plug-and-play conditional alignment paradigm for diffusion models.

Technology Category

Application Category

📝 Abstract
We introduce a novel resampling criterion using lift scores, for improving compositional generation in diffusion models. By leveraging the lift scores, we evaluate whether generated samples align with each single condition and then compose the results to determine whether the composed prompt is satisfied. Our key insight is that lift scores can be efficiently approximated using only the original diffusion model, requiring no additional training or external modules. We develop an optimized variant that achieves relatively lower computational overhead during inference while maintaining effectiveness. Through extensive experiments, we demonstrate that lift scores significantly improved the condition alignment for compositional generation across 2D synthetic data, CLEVR position tasks, and text-to-image synthesis. Our code is available at http://github.com/rainorangelemon/complift.
Problem

Research questions and friction points this paper is trying to address.

Improving compositional generation in diffusion models
Evaluating sample alignment with conditions using lift scores
Achieving condition alignment without additional training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lift scores resampling for compositional generation
No extra training, uses original diffusion model
Optimized variant reduces computational overhead
🔎 Similar Papers
No similar papers found.