Efficiently Serving Large Multimodal Models Using EPD Disaggregation

πŸ“… 2024-12-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large multimodal models (LMMs) suffer from high computational and memory overhead during multimodal encoding, leading to degraded service-level objectives (SLOs)β€”notably increased time-to-first-token (TTFT) and reduced throughput. To address this, we propose Encode-Prefill-Decode (EPD), a novel three-stage decoupled serving architecture. EPD isolates encoding, prefill, and decoding onto dedicated hardware resources and integrates multimedia token caching and reuse, intra-request parallelized encoding, KV cache sharding, dynamic resource scheduling, and load-aware role switching. Experiments demonstrate that EPD reduces memory utilization by 15Γ—, increases batch size by 22Γ—, supports up to 10Γ— more images per request, and expands effective KV cache capacity by 2.2Γ—. Furthermore, TTFT decreases by 71%, and end-to-end latency drops by 57%. EPD establishes a scalable, low-overhead systems paradigm for efficient LMM serving.
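The three-stage split described above can be sketched as a pipeline in which each stage runs on its own resources and hands its output to the next. This is a minimal illustrative sketch, not the paper's implementation; all class and function names (`Request`, `encode_stage`, `serve`, etc.) are hypothetical, and the sequential loop stands in for what EPD runs on separate hardware connected by token and KV-cache transfer.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    request_id: int
    images: list                                  # raw multimodal inputs
    tokens: list = field(default_factory=list)    # encoded multimedia tokens
    kv_cache: dict = field(default_factory=dict)  # prefill output
    output: str = ""

def encode_stage(req: Request) -> Request:
    # Encode each image into multimedia tokens; in EPD this step can be
    # parallelized across the images within a single request.
    req.tokens = [f"tok({img})" for img in req.images]
    return req

def prefill_stage(req: Request) -> Request:
    # Build the KV cache from the encoded tokens (stand-in for the
    # transformer prefill pass).
    req.kv_cache = {i: t for i, t in enumerate(req.tokens)}
    return req

def decode_stage(req: Request) -> Request:
    # Autoregressive decoding against the transferred KV cache.
    req.output = f"answer using {len(req.kv_cache)} cached tokens"
    return req

def serve(requests):
    # Sequential stand-in for three disaggregated workers: in EPD each
    # stage would run on dedicated hardware, so an encode worker can start
    # on the next request while earlier ones are still prefilling/decoding.
    return [decode_stage(prefill_stage(encode_stage(r))) for r in requests]
```

The point of the split is that the memory-heavy encode stage no longer competes with decode for the same GPU, which is what enables the larger batch sizes and KV caches reported above.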

πŸ“ Abstract
Large Multimodal Models (LMMs) extend Large Language Models (LLMs) by handling diverse inputs such as images, audio, and video, but at the cost of adding a multimodal encoding stage that increases both computational and memory overhead. This step negatively impacts key Service Level Objectives (SLOs) such as time to first token (TTFT) and end-to-end throughput (E2ETP). We introduce Encode-Prefill-Decode (EPD) Disaggregation, a novel framework that separates the encoding, prefill, and decode stages onto dedicated resources. Unlike current systems, which bundle encoding and prefill together, our approach decouples these steps, unlocking new opportunities and optimizations. These include a new mechanism to cache multimedia tokens for efficient transfer, a novel way to parallelize encoding load within a request, a module to find the optimal resource allocation for disaggregated serving, and a novel role-switching method to handle changing workload characteristics. Experimental evaluations with popular LMMs show substantial gains in memory efficiency (up to 15× less utilization), batch sizes (up to 22× larger), 10× more images per request, and 2.2× larger KV caches. Further, it leads to significant improvements in latency metrics (TTFT up to 71% reduction) and end-to-end throughput (up to 57% improvement), compared to systems that do not disaggregate.
Problem

Research questions and friction points this paper is trying to address.

High computational and memory overhead from the multimodal encoding stage in Large Multimodal Models.
Degraded time to first token and end-to-end throughput when encoding and prefill share resources.
Suboptimal resource allocation across the encoding, prefill, and decode stages.
Innovation

Methods, ideas, or system contributions that make the work stand out.

EPD Disaggregation separates encoding, prefill, and decode stages onto dedicated resources
Caches multimedia tokens for efficient transfer
Parallelizes encoding load within requests
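The multimedia token caching listed above could work as a content-addressed store: encoded tokens are keyed by a hash of the raw media, so a repeated image skips re-encoding entirely. This is a minimal sketch under that assumption; the class and its `get_or_encode` method are hypothetical names, not the paper's API.

```python
import hashlib

class MultimediaTokenCache:
    """Illustrative cache: reuse encoded multimedia tokens across requests."""

    def __init__(self):
        self._store = {}   # sha256(media) -> encoded tokens
        self.hits = 0
        self.misses = 0

    def _key(self, media_bytes: bytes) -> str:
        # Content-addressing: identical media always maps to the same key.
        return hashlib.sha256(media_bytes).hexdigest()

    def get_or_encode(self, media_bytes: bytes, encode_fn):
        # Return cached tokens if the media was seen before; otherwise
        # run the (expensive) encoder once and store the result.
        key = self._key(media_bytes)
        if key in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[key] = encode_fn(media_bytes)
        return self._store[key]
```

A usage pattern: two requests carrying the same image call `get_or_encode` with identical bytes; only the first pays the encoding cost, and the second transfers the already-encoded tokens to the prefill workers.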