🤖 AI Summary
Text-to-image diffusion models incur high inference costs, hindering real-time deployment. Existing cascaded serving systems use static configurations that force every query through a lightweight model for initial screening, wasting GPU resources on semantically complex prompts. This work proposes a hybrid adaptive serving system with three components: (1) a rule-based prompt router that sends semantically complex queries directly to heavyweight models; (2) offline performance profiling that builds a Pareto-optimal configuration table, enabling dynamic cascade decisions at runtime; and (3) joint optimization of query routing, model selection, and GPU resource allocation. Experiments under realistic workloads show that, compared with baseline systems, the approach improves response quality by up to 35% and reduces latency violation rates by 2.7×–45×.
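The rule-based routing idea can be sketched as follows. This is a minimal illustration, not HADIS's actual router: the keyword list, length threshold, and feature choices are hypothetical stand-ins for whatever rules the system uses to flag "clearly hard" prompts.

```python
# Hypothetical sketch of a rule-based prompt router: prompts judged clearly
# hard skip the lightweight stage and go straight to the heavyweight model;
# everything else enters the lightweight-first cascade.

HARD_KEYWORDS = {"photorealistic", "intricate", "4k", "crowd"}  # assumed features

def route(prompt: str, max_light_tokens: int = 25) -> str:
    tokens = [t.strip(",.") for t in prompt.lower().split()]
    # Rule 1: very long prompts tend to describe semantically complex scenes.
    if len(tokens) > max_light_tokens:
        return "heavy"
    # Rule 2: keywords that correlate with high visual complexity.
    if any(t in HARD_KEYWORDS for t in tokens):
        return "heavy"
    # Default: try the lightweight model first (cascade path).
    return "cascade"

print(route("a cat"))                                    # -> cascade
print(route("a photorealistic portrait of an old man"))  # -> heavy
```

Routing hard prompts directly to the heavyweight model avoids paying the lightweight stage's latency on queries it would fail anyway.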
📝 Abstract
Text-to-image diffusion models have achieved remarkable visual quality but incur high computational costs, making real-time, scalable deployment challenging. Existing query-aware serving systems mitigate the cost by cascading lightweight and heavyweight models, but most rely on a fixed cascade configuration and route all prompts through an initial lightweight stage, wasting resources on complex queries. We present HADIS, a hybrid adaptive diffusion model serving system that jointly optimizes cascade model selection, query routing, and resource allocation. HADIS employs a rule-based prompt router to send clearly hard queries directly to heavyweight models, bypassing the overhead of the lightweight stage. To reduce the complexity of resource management, HADIS uses an offline profiling phase to produce a Pareto-optimal cascade configuration table. At runtime, HADIS selects the best cascade configuration and GPU allocation given latency and workload constraints. Empirical evaluations on real-world traces demonstrate that HADIS improves response quality by up to 35% while reducing latency violation rates by 2.7–45× compared to state-of-the-art model serving systems.
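The offline profiling step can be illustrated with a small sketch of Pareto-front filtering. The configuration names and quality/latency numbers below are made up for illustration; the point is only how dominated cascade configurations get pruned so the runtime scheduler searches a small table rather than the full configuration space.

```python
# Illustrative sketch (not HADIS's implementation): reduce profiled cascade
# configurations to a Pareto-optimal table over (quality, latency).

def pareto_front(configs):
    """Keep configs not dominated by another config that is at least as good
    in quality AND at most as slow in latency."""
    front = []
    for c in configs:
        dominated = any(
            o is not c and o["quality"] >= c["quality"] and o["latency"] <= c["latency"]
            for o in configs
        )
        if not dominated:
            front.append(c)
    # Sort by latency so the runtime can scan the table against a latency budget.
    return sorted(front, key=lambda c: c["latency"])

# Hypothetical offline profiling results (quality score, seconds per image).
profiled = [
    {"name": "light-only",  "quality": 0.62, "latency": 0.8},
    {"name": "cascade-0.5", "quality": 0.74, "latency": 1.4},
    {"name": "cascade-0.8", "quality": 0.81, "latency": 2.6},
    {"name": "heavy-only",  "quality": 0.83, "latency": 4.0},
    {"name": "cascade-bad", "quality": 0.70, "latency": 3.0},  # dominated by cascade-0.8
]

table = pareto_front(profiled)
print([c["name"] for c in table])
# -> ['light-only', 'cascade-0.5', 'cascade-0.8', 'heavy-only']
```

At runtime, a scheduler would pick the highest-quality entry in this table whose latency fits the current SLO and GPU availability, which is a single pass over a short sorted list.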