AI Summary
Existing text-to-image generation methods are limited in controlling regional layout and occlusion order, owing to data bias, degraded image quality, and insufficient editing flexibility. This work proposes LayerBind, a training-free, plug-and-play layered generation framework that, for the first time, enables editable region and occlusion control within diffusion Transformers. By leveraging layered instance initialization, context-sharing multimodal attention, layer-wise semantic preservation, and opacity-scheduled attention pathways, LayerBind models independent image layers during the early denoising stages. Experiments demonstrate that LayerBind significantly improves both regional control accuracy and occlusion handling while maintaining high image fidelity, thereby supporting flexible and efficient image editing.
Abstract
Region-instructed layout control in text-to-image generation is highly practical, yet existing methods suffer from two limitations: (i) training-based approaches inherit data bias and often degrade image quality, and (ii) current techniques struggle with occlusion order, limiting real-world usability. To address these issues, we propose LayerBind. By modeling regional generation as distinct layers and binding them during generation, our method enables precise regional and occlusion control. Our motivation stems from the observation that spatial layout and occlusion are established at a very early denoising stage, suggesting that rearranging the early latent structure is sufficient to modify the final output. Building on this, we structure the scheme into two phases: instance initialization and subsequent semantic nursing. (1) First, leveraging the contextual sharing mechanism in multimodal joint attention, Layer-wise Instance Initialization creates per-instance branches that attend to their own regions while anchoring to the shared background. At a designated early step, these branches are fused according to the layer order to form a unified latent with a pre-established layout. (2) Then, Layer-wise Semantic Nursing reinforces regional details and maintains the occlusion order via layer-wise attention enhancement. Specifically, a sequential layered attention path operates alongside the standard global path, with updates composited under a layer-transparency scheduler. LayerBind is training-free and plug-and-play, serving as a regional and occlusion controller across Diffusion Transformers. Beyond generation, it natively supports editable workflows, allowing for flexible modifications such as changing instances or rearranging visible orders. Both qualitative and quantitative results demonstrate LayerBind's effectiveness, highlighting its strong potential for creative applications.
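To make the two-phase scheme concrete, the core operations can be sketched as simple tensor compositing: fusing per-instance branch latents back-to-front by layer order (instance initialization), and blending the layered attention path with the global path under a scheduled transparency weight (semantic nursing). This is a minimal illustrative sketch, not the paper's actual implementation; all function and variable names (`composite_layers`, `nursing_update`, `alpha`, the mask/latent dictionaries) are hypothetical.

```python
import torch

def composite_layers(latents, masks, order):
    """Fuse per-instance branch latents into one latent by layer order.
    Entries later in `order` sit nearer to the viewer, so they overwrite
    earlier ones where region masks overlap -- this pre-establishes the
    occlusion order in the unified latent. Illustrative only."""
    fused = latents["background"].clone()
    for name in order:            # iterate back-to-front
        m = masks[name]           # (1, 1, H, W) binary region mask
        fused = m * latents[name] + (1 - m) * fused
    return fused

def nursing_update(global_out, layered_out, alpha):
    """Composite the sequential layered attention path with the standard
    global path under a transparency weight alpha in [0, 1], which a
    scheduler would vary over denoising steps. Illustrative only."""
    return (1 - alpha) * global_out + alpha * layered_out
```

In this sketch, occlusion editing amounts to permuting `order` and re-running the fusion, which matches the abstract's claim that rearranging the early latent structure suffices to change the visible order.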