🤖 AI Summary
To exploit the mask sparsity that existing diffusion-based mask-guided image editing services leave untapped, this paper proposes InstGenIE, a production-oriented inference serving system. InstGenIE introduces three key techniques: (1) a mask-aware caching mechanism that reuses intermediate activations from previous inferences and skips computation for unedited regions; (2) a bubble-free pipeline that overlaps computation with cache loading to hide the cache-loading overhead; and (3) a continuous batching strategy that lets newly arrived requests join the running batch within a single denoising step, combined with a load-balancing policy that models the heterogeneous workloads induced by different masks. Experiments show that, while preserving image quality, InstGenIE achieves up to 3× higher system throughput and reduces average end-to-end latency by up to 14.7× compared with state-of-the-art diffusion serving systems.
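The core caching idea can be illustrated with a minimal sketch. This is an assumed, simplified API (the function name `masked_denoise_step` and the flat activation layout are ours, not InstGenIE's actual implementation): activations are recomputed only for masked (edited) positions, while cached activations from a previous inference are reused everywhere else.

```python
import numpy as np

def masked_denoise_step(x, mask, cached_act, compute_fn):
    """Sketch of mask-aware activation reuse (illustrative, not the
    paper's implementation): recompute activations only where the
    mask marks edited regions; reuse cached activations elsewhere."""
    out = cached_act.copy()              # start from the cached activations
    edited = mask.astype(bool)           # True where the template is edited
    out[edited] = compute_fn(x[edited])  # recompute only the masked region
    return out

# Toy usage: a 4-element "activation map" with positions 0 and 2 masked.
x = np.arange(4.0)
mask = np.array([1, 0, 1, 0])
cached = np.full(4, -1.0)
result = masked_denoise_step(x, mask, cached, lambda v: v * 2)
```

With a sparse mask, `compute_fn` runs on a small slice of the input, which is where the throughput gain comes from; the trade-off is that the cached activations must be loaded, motivating the overlapped pipeline described above.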
📝 Abstract
Generative image editing using diffusion models has become a prevalent application in today's AI cloud services. In production environments, image editing typically involves a mask that specifies the regions of an image template to be edited. The use of masks provides direct control over the editing process and introduces sparsity in the model inference. In this paper, we present InstGenIE, a system that efficiently serves image editing requests. The key insight behind InstGenIE is that image editing only modifies the masked regions of image templates while preserving the original content in the unmasked areas. Driven by this insight, InstGenIE judiciously skips redundant computations associated with the unmasked areas by reusing cached intermediate activations from previous inferences. To mitigate the high cache loading overhead, InstGenIE employs a bubble-free pipeline scheme that overlaps computation with cache loading. Additionally, to reduce queuing latency in online serving while improving GPU utilization, InstGenIE proposes a novel continuous batching strategy for diffusion model serving, allowing newly arrived requests to join the running batch in just one step of denoising computation, without waiting for the entire batch to complete. As heterogeneous masks induce imbalanced loads, InstGenIE also develops a load balancing strategy that accounts for the loads of both computation and cache loading. Collectively, InstGenIE outperforms state-of-the-art diffusion serving systems for image editing, achieving up to 3× higher throughput and reducing average request latency by up to 14.7× while ensuring image quality.
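The continuous batching behavior described above can be sketched as a small scheduler loop. This is a hedged illustration under our own assumptions (the function `serve_continuous` and its request representation are hypothetical, not InstGenIE's scheduler): a request that arrives while a batch is in flight is admitted at the very next denoising-step boundary, rather than waiting for the whole batch to drain.

```python
from collections import deque

def serve_continuous(initial, arrivals, total_steps=4):
    """Illustrative single-step continuous batching: each loop
    iteration is one denoising step for the whole running batch,
    and waiting requests are admitted at every step boundary.
    `arrivals` is a deque-able list of (arrival_step, request_id)."""
    waiting = deque(arrivals)
    batch = {rid: 0 for rid in initial}  # request_id -> denoising steps done
    log, step = [], 0
    while batch or waiting:
        # Admit any request that has arrived by this step boundary:
        # it waits at most one denoising step, not a full batch.
        while waiting and waiting[0][0] <= step:
            _, rid = waiting.popleft()
            batch[rid] = 0
        # One denoising step for every request in the running batch.
        for rid in list(batch):
            batch[rid] += 1
            if batch[rid] == total_steps:
                log.append((rid, step))  # (request, step at which it finished)
                del batch[rid]
        step += 1
    return log

# Request "a" starts at step 0; "b" arrives at step 2 and joins immediately.
completions = serve_continuous(["a"], [(2, "b")], total_steps=4)
```

Under naive static batching, "b" would wait until "a" finished all its denoising steps before starting; here it begins at the next step boundary, which is what cuts the queuing latency.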