🤖 AI Summary
This work addresses the high computational cost and low inference efficiency of diffusion models in text-to-image generation, particularly for large-scale architectures. The authors propose a novel hybrid inference paradigm that frames image synthesis as an editing process: within a single denoising step, pixels are dynamically assigned to either a small or a large model based on their local complexity. The small model rapidly produces a coarse sketch, while the large model refines only the challenging regions. This approach leverages pixel-level region segmentation and multi-model collaboration to enable fine-grained allocation of computational resources. Evaluated on Stable Diffusion 3, the method achieves a 1.83× speedup over standard inference, substantially outperforming existing model-mixing strategies.
📝 Abstract
Diffusion models have demonstrated remarkable ability in Text-to-Image (T2I) generation. Despite their advanced generation quality, they suffer from heavy computational overhead, especially large models containing tens of billions of parameters. Prior work has shown that replacing some of the denoising steps with a smaller model still maintains generation quality. However, these methods only save computation across timesteps, ignoring differences in compute demand within a single timestep. In this work, we propose HybridStitch, a new T2I generation paradigm that treats generation as editing. Specifically, we introduce a hybrid stage that jointly incorporates both the large model and the small model. HybridStitch separates the image into two regions: one that is relatively easy to render, enabling an early hand-off to the smaller model, and another that is more complex and therefore requires refinement by the large model. HybridStitch employs the small model to construct a coarse sketch while exploiting the large model to edit and refine the complex regions. According to our evaluation, HybridStitch achieves a 1.83$\times$ speedup on Stable Diffusion 3, faster than all existing mixture-of-models methods.
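The hybrid stage described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the complexity criterion, the models, and the threshold are all placeholders, and the actual HybridStitch region-segmentation rule is not specified in the abstract.

```python
import numpy as np

def hybrid_denoise_step(x, small_model, large_model, threshold=0.05):
    """One hybrid denoising step (hypothetical sketch, not the paper's code).

    The small model denoises the whole latent to form a coarse sketch;
    the large model then re-denoises only the pixels flagged as complex.
    Complexity is approximated here by the magnitude of the small model's
    residual, a stand-in for whatever criterion HybridStitch actually uses.
    """
    coarse = small_model(x)              # cheap full-image sketch
    residual = np.abs(coarse - x)        # proxy for local difficulty
    complex_mask = residual > threshold  # pixel-level region split
    # Stitch: large-model output in hard regions, small-model elsewhere
    refined = np.where(complex_mask, large_model(x), coarse)
    return refined, complex_mask

# Toy usage with stand-in "denoisers" (simple scalings of the input)
x = np.random.rand(8, 8)
small = lambda z: z * 0.9
large = lambda z: z * 0.5
out, mask = hybrid_denoise_step(x, small, large)
```

Because the large model only runs on the masked region, the potential savings scale with the fraction of pixels classified as easy; in practice one would also skip large-model compute entirely for fully-easy tiles rather than discarding its output as this sketch does.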