🤖 AI Summary
Existing text-to-image diffusion models rely on manually crafted textual prompts for image editing, which often introduces irrelevant details and limits efficiency. This paper proposes a zero-shot, training-free, classifier-guided semantic optimization framework: it leverages pretrained attribute classifiers to learn disentangled semantic embeddings in the diffusion latent space and enables precise intervention in the generation process via gradient-free semantic projection. Crucially, the method requires no modification of model parameters and operates entirely without textual prompts. The authors theoretically show that the learned semantic embeddings constitute optimal attribute representations under the given classifier constraints. Extensive experiments across diverse domains demonstrate strong generalization and high-fidelity, disentangled semantic editing, significantly outperforming prompt-based approaches in both controllability and fidelity.
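A minimal sketch of what such classifier-guided embedding learning could look like, assuming a frozen diffusion model and a pretrained attribute classifier. Here `denoise_with_embedding`, `attribute_classifier`, and the embedding dimensionality are illustrative stand-ins, not the paper's actual interfaces:

```python
import torch
import torch.nn.functional as F

def learn_semantic_embedding(latents, denoise_with_embedding, attribute_classifier,
                             target_class, embed_dim=768, steps=200, lr=1e-2):
    """Optimize a single semantic embedding so that images denoised under it are
    classified as the target attribute; only the embedding receives gradients."""
    embedding = torch.zeros(1, embed_dim, device=latents.device, requires_grad=True)
    optimizer = torch.optim.Adam([embedding], lr=lr)
    targets = torch.full((latents.size(0),), target_class, device=latents.device)

    for _ in range(steps):
        optimizer.zero_grad()
        # Denoise a batch of noised latents conditioned on the learnable embedding;
        # the diffusion model itself stays frozen throughout.
        images = denoise_with_embedding(latents, embedding)
        # Classifier loss pulls the embedding toward the desired attribute.
        loss = F.cross_entropy(attribute_classifier(images), targets)
        loss.backward()
        optimizer.step()
    return embedding.detach()
```

In this reading, the classifier is only needed while learning the embedding; once learned, the embedding can be injected at generation time without any gradient computation, matching the summary's "gradient-free" intervention claim.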
📝 Abstract
Text-to-image diffusion models have emerged as powerful tools for high-quality image generation and editing. Many existing approaches rely on text prompts as editing guidance. However, these methods require manual prompt crafting, which is time-consuming, can introduce irrelevant details, and significantly limits editing performance. In this work, we propose optimizing semantic embeddings guided by attribute classifiers to steer text-to-image models toward desired edits, without relying on text prompts or requiring any training or fine-tuning of the diffusion model. We utilize classifiers to learn precise semantic embeddings at the dataset level. The learned embeddings are theoretically justified as the optimal representation of attribute semantics, enabling disentangled and accurate edits. Experiments further demonstrate that our method achieves a high degree of disentanglement and strong generalization across diverse data domains.
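Continuing the sketch above, prompt-free editing would then amount to reusing the learned embedding as conditioning at sampling time; `denoise_with_embedding` and the `strength` scale are again hypothetical, and the paper's actual injection mechanism may differ:

```python
import torch

@torch.no_grad()
def edit_with_embedding(source_latent, semantic_embedding, denoise_with_embedding,
                        strength=1.0):
    """Prompt-free editing: steer the frozen diffusion model by injecting the
    learned attribute embedding, scaled to control edit intensity."""
    return denoise_with_embedding(source_latent, strength * semantic_embedding)
```

Because only the conditioning changes, no prompts, fine-tuning, or backward passes are involved at edit time in this sketch.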