🤖 AI Summary
This work addresses the challenge of precise layout control in text-to-image generation. We propose a training-free sketch-guided diffusion method that leverages user-drawn sketches as structural priors. By introducing an iterative latent-space optimization mechanism grounded in cross-attention maps, our approach dynamically refines noisy latent variables during the denoising process to faithfully reconstruct sketch geometry. Crucially, the method requires no modification to pre-trained diffusion model parameters; instead, it employs joint text-and-sketch conditioning to simultaneously preserve textual semantic fidelity and significantly enhance spatial layout controllability. Our core contribution is the first training-free sketch-guidance paradigm for diffusion models, enabling real-time, interactive editing. This framework offers an efficient, lightweight pathway toward controllable image synthesis—bypassing costly retraining or fine-tuning while achieving high structural adherence to input sketches.
📝 Abstract
Built on recent advances in diffusion models, text-to-image (T2I) generation models have demonstrated the ability to generate diverse, high-quality images. However, leveraging their potential for real-world content creation, particularly giving users precise control over the generated result, remains a significant challenge. In this paper, we propose a training-free pipeline that extends existing text-to-image generation models to accept a sketch as an additional condition. To generate images whose layout and structure closely resemble the input sketch, we find that these core features of a sketch can be tracked through the cross-attention maps of diffusion models. We introduce latent optimization, a method that refines the noisy latent at each intermediate step of the generation process using cross-attention maps, ensuring that the generated images adhere closely to the structure outlined in the reference sketch. Through latent optimization, our method improves the accuracy of image generation, offering users greater control and customization in content creation.
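The latent-optimization loop described above can be illustrated with a minimal toy sketch. This is an assumption-heavy stand-in, not the paper's implementation: a sigmoid of the latent plays the role of a cross-attention map, a binary mask plays the role of the user sketch, and each "denoising step" is replaced by a single gradient step pulling the attention proxy toward the mask.

```python
import numpy as np

def attention_proxy(z):
    """Stand-in (assumed, not the paper's model) for a cross-attention
    map derived from the noisy latent; values lie in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))  # element-wise sigmoid

def latent_optimization_step(z, sketch_mask, lr=0.5):
    """One gradient step on an L2 loss between the attention proxy and
    the binary sketch mask, updating the latent in place of a real
    per-timestep refinement."""
    a = attention_proxy(z)
    grad = 2.0 * (a - sketch_mask) * a * (1.0 - a)  # d(loss)/d(z)
    return z - lr * grad

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 8))                 # toy "noisy latent"
sketch = np.zeros((8, 8))
sketch[2:6, 2:6] = 1.0                      # toy binary sketch region

losses = []
for _ in range(200):                        # stands in for denoising steps
    losses.append(float(((attention_proxy(z) - sketch) ** 2).sum()))
    z = latent_optimization_step(z, sketch)
```

After the loop, the attention proxy concentrates on the sketch region and the loss has shrunk, mirroring (in miniature) how latent optimization steers the generation toward the sketch's structure without touching any model weights.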