DiffServe: Efficiently Serving Text-to-Image Diffusion Models with Query-Aware Model Scaling

📅 2024-11-22
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Serving text-to-image diffusion models is computationally expensive and must cope with dynamic, heterogeneous query workloads. Method: the paper proposes DiffServe, a query-aware model-scaling framework that builds a cascaded inference architecture from lightweight and high-fidelity diffusion model variants. Queries are routed by predicted difficulty, so easy queries are handled by lightweight models while harder ones escalate to larger variants, preserving generation quality. Technically, the system automatically constructs cascades from available model variants and combines query-difficulty estimation, cascaded inference scheduling, and adaptive resource allocation under demand fluctuations. Contribution/Results: compared to state-of-the-art serving systems, DiffServe improves response quality by up to 24% and reduces latency violation rates by 19–70%.

📝 Abstract
Text-to-image generation using diffusion models has gained increasing popularity due to their ability to produce high-quality, realistic images based on text prompts. However, efficiently serving these models is challenging due to their computation-intensive nature and the variation in query demands. In this paper, we aim to address both problems simultaneously through query-aware model scaling. The core idea is to construct model cascades so that easy queries can be processed by more lightweight diffusion models without compromising image generation quality. Based on this concept, we develop an end-to-end text-to-image diffusion model serving system, DiffServe, which automatically constructs model cascades from available diffusion model variants and allocates resources dynamically in response to demand fluctuations. Our empirical evaluations demonstrate that DiffServe achieves up to 24% improvement in response quality while maintaining 19-70% lower latency violation rates compared to state-of-the-art model serving systems.
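The cascade idea in the abstract can be sketched in a few lines: every query first runs on a lightweight model, and a quality (or difficulty) estimator decides whether the result is good enough or the query should escalate to the high-fidelity model. The function names, the `[0, 1]` score range, and the threshold below are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of query-aware cascaded serving (illustrative only,
# not DiffServe's actual implementation).

def serve_query(prompt, light_model, heavy_model, quality_estimator, threshold=0.7):
    """Return (image, tier) for a text prompt.

    light_model / heavy_model: callables mapping prompt -> image.
    quality_estimator: callable mapping (prompt, image) -> score in [0, 1]
    (assumed convention); higher means the cheap result is good enough.
    """
    image = light_model(prompt)
    score = quality_estimator(prompt, image)
    if score >= threshold:
        return image, "light"            # easy query: lightweight model suffices
    return heavy_model(prompt), "heavy"  # hard query: escalate to the big model
```

Because every query pays the lightweight-model cost, such a cascade only helps when the estimator routes enough queries away from the heavy model to offset that overhead.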
Problem

Research questions and friction points this paper is trying to address.

Efficiently serving computation-intensive diffusion models
Handling varying query demands in text-to-image generation
Balancing image quality and latency in model serving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Query-aware model scaling for efficient serving
Model cascades for lightweight query processing
Dynamic resource allocation for demand fluctuations