InstGenIE: Generative Image Editing Made Efficient with Mask-aware Caching and Scheduling

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the underutilization of mask sparsity in diffusion-based, mask-guided image editing services, this paper proposes InstGenIE, a production-oriented inference serving system. InstGenIE introduces three key innovations: (1) a mask-aware caching and reuse mechanism for intermediate activations that skips computation for unedited regions; (2) a bubble-free pipeline that overlaps computation with cache loading to alleviate GPU memory-bandwidth bottlenecks; and (3) single-step continuous batching coupled with dynamic load balancing guided by modeling of heterogeneous mask workloads. Experimental evaluation demonstrates that, while strictly preserving image quality, InstGenIE achieves up to 3.0× higher system throughput and reduces average end-to-end latency by up to 14.7×, significantly outperforming state-of-the-art diffusion serving systems.

📝 Abstract
Generative image editing using diffusion models has become a prevalent application in today's AI cloud services. In production environments, image editing typically involves a mask that specifies the regions of an image template to be edited. The use of masks provides direct control over the editing process and introduces sparsity in the model inference. In this paper, we present InstGenIE, a system that efficiently serves image editing requests. The key insight behind InstGenIE is that image editing only modifies the masked regions of image templates while preserving the original content in the unmasked areas. Driven by this insight, InstGenIE judiciously skips redundant computations associated with the unmasked areas by reusing cached intermediate activations from previous inferences. To mitigate the high cache loading overhead, InstGenIE employs a bubble-free pipeline scheme that overlaps computation with cache loading. Additionally, to reduce queuing latency in online serving while improving GPU utilization, InstGenIE proposes a novel continuous batching strategy for diffusion model serving, allowing newly arrived requests to join the running batch in just one step of denoising computation, without waiting for the entire batch to complete. As heterogeneous masks induce imbalanced loads, InstGenIE also develops a load balancing strategy that takes into account the loads of both computation and cache loading. Collectively, InstGenIE outperforms state-of-the-art diffusion serving systems for image editing, achieving up to 3x higher throughput and reducing average request latency by up to 14.7x while ensuring image quality.
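The mask-aware reuse idea from the abstract can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: it assumes purely token-wise layers (attention layers would still need full context), and all function names are hypothetical. Per layer, only the masked tokens are recomputed; unmasked tokens are spliced in from activations cached during a previous inference over the same template.

```python
import numpy as np

def forward_full(x, layers):
    """Plain forward pass over token-wise layers; returns the output and
    every layer's activation (the cache a template inference would produce)."""
    acts = []
    for layer in layers:
        x = layer(x)
        acts.append(x)
    return x, acts

def forward_masked(template_acts, edited_latent, mask, layers):
    """Mask-aware sketch: recompute only the masked tokens and reuse the
    template's cached activations for the unmasked tokens."""
    idx = np.flatnonzero(mask)            # indices of edited tokens
    x = edited_latent
    for layer, cached in zip(layers, template_acts):
        out = cached.copy()               # unmasked tokens come from cache for free
        out[idx] = layer(x[idx])          # only masked tokens are recomputed
        x = out
    return x
```

Under these assumptions the masked pass is exact: unmasked tokens of the edited latent equal the template's, so the cached activations match what a full recompute would produce.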
Problem

Research questions and friction points this paper is trying to address.

Efficient generative image editing with masked regions
Reducing redundant computations via cached activations
Optimizing batch processing and load balancing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mask-aware caching skips redundant unmasked computations
Bubble-free pipeline overlaps computation with cache loading
Continuous batching reduces latency and improves GPU utilization
Xiaoxiao Jiang
Hong Kong University of Science and Technology & Alibaba Group
Suyi Li
HKUST
Cloud Computing · Machine Learning System · Natural Language Processing
Lingyun Yang
Ph.D., Hong Kong University of Science and Technology
Machine Learning Systems · GPU Cluster Management
Tianyu Feng
PhD Student, HKUST
machine learning systems · large scale training
Zhipeng Di
Hong Kong University of Science and Technology & Alibaba Group
Weiyi Lu
Hong Kong University of Science and Technology & Alibaba Group
Guoxuan Zhu
Hong Kong University of Science and Technology & Alibaba Group
Xiu Lin
Hong Kong University of Science and Technology & Alibaba Group
Kan Liu
Hong Kong University of Science and Technology & Alibaba Group
Yinghao Yu
Engineer, Alibaba
Resource management in containerized clusters · Generation optimizations for distributed systems
Tao Lan
Hong Kong University of Science and Technology & Alibaba Group
Guodong Yang
Hong Kong University of Science and Technology & Alibaba Group
Lin Qu
Hong Kong University of Science and Technology & Alibaba Group
Liping Zhang
Hong Kong University of Science and Technology & Alibaba Group
Wei Wang
Hong Kong University of Science and Technology & Alibaba Group