🤖 AI Summary
Existing text-to-image generation methods struggle to directly control lighting conditions, often relying on inefficient two-stage relighting pipelines that require extensive fine-tuning. This work proposes a novel approach that achieves fine-grained, dynamic lighting control without any training or modification of pre-trained diffusion models. By analyzing channel-wise structures in the latent space and selectively manipulating the initial noise according to user-specified lighting directions alongside textual prompts, the method enables precise illumination guidance during image synthesis. It is compatible with frameworks such as ControlNet and maintains high image fidelity and strong text alignment while significantly outperforming prompt-based baselines, particularly in terms of lighting consistency.
📝 Abstract
Diffusion models have achieved high-quality results in conditional text-to-image generation, particularly with structural cues such as edges, layouts, and depth. However, lighting conditions have received limited attention and remain difficult to control within the generative process. Existing methods handle lighting through an inefficient two-stage pipeline that relights images after generation. Moreover, they rely on fine-tuning with large datasets and heavy computation, limiting their adaptability to new models and tasks. To address this, we propose a novel Training-Free Light-Guided Text-to-Image Diffusion Model via Initial Noise Manipulation (LGTM), which manipulates the initial latent noise of the diffusion process to guide image generation with text prompts and user-specified light directions. Through a channel-wise analysis of the latent space, we find that selectively manipulating latent channels enables fine-grained lighting control without fine-tuning or modifying the pre-trained model. Extensive experiments show that our method surpasses prompt-based baselines in lighting consistency while preserving image quality and text alignment. This approach opens new possibilities for dynamic, user-guided lighting control. Furthermore, it integrates seamlessly with models such as ControlNet, demonstrating adaptability across diverse scenarios.
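To make the core idea concrete, here is a minimal, hypothetical sketch of channel-wise initial-noise manipulation. The paper does not specify which channels are modified or how the lighting direction is injected; the function below simply blends a directional brightness gradient into user-chosen latent channels and re-standardizes them so the latent keeps the roughly unit-variance statistics the diffusion prior expects. All names and parameters (`light_dir`, `channels`, `strength`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def light_guided_init_noise(shape=(4, 64, 64), light_dir=(1.0, 0.0),
                            channels=(0,), strength=0.3, seed=0):
    """Bias the initial diffusion noise toward a lighting direction.

    Hypothetical sketch: blends a 2D directional gradient into selected
    latent channels, then re-standardizes each modified channel so the
    latent stays approximately standard Gaussian.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape).astype(np.float32)

    _, h, w = shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Normalized pixel coordinates in [-1, 1]
    xs = 2 * xs / (w - 1) - 1
    ys = 2 * ys / (h - 1) - 1
    dx, dy = light_dir
    # Brightness ramp along the requested light direction
    grad = (xs * dx + ys * dy) / (np.hypot(dx, dy) + 1e-8)

    for ch in channels:
        mixed = (1 - strength) * noise[ch] + strength * grad
        # Re-standardize so overall noise statistics remain Gaussian-like
        noise[ch] = (mixed - mixed.mean()) / (mixed.std() + 1e-8)
    return noise

latent = light_guided_init_noise()
```

The resulting `latent` would replace the usual random seed tensor at the start of sampling; because only the starting noise changes, the pre-trained denoiser itself is left untouched, which is what makes the approach training-free.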