🤖 AI Summary
Online serving of any-to-any multimodal models faces significant challenges due to extreme heterogeneity in request types, computation paths, and resource requirements. Method: The paper proposes the first automated deployment planning framework that supports generic computation graph modeling, enabling component-level dynamic disaggregation, heterogeneous hardware-aware scheduling, and distributed runtime co-optimization. Contribution/Results: The framework systematically addresses the challenge of serving arbitrary combinations of multimodal inputs (text, image, video, audio) and multimodal outputs. Evaluated against state-of-the-art baselines, it achieves up to 3.81× higher throughput and up to 5.79× lower P99 latency, significantly improving serving efficiency and scalability while maintaining end-to-end correctness and output quality.
📝 Abstract
We present Cornserve, an efficient online serving system for an emerging class of multimodal models called Any-to-Any models. Any-to-Any models accept combinations of text and multimodal data (e.g., image, video, audio) as input and also generate combinations of text and multimodal data as output, introducing heterogeneity in request types, computation paths, and computation scaling during model serving.
Cornserve allows model developers to describe the computation graph of generic Any-to-Any models, which consists of heterogeneous components such as multimodal encoders, autoregressive models like Large Language Models (LLMs), and multimodal generators like Diffusion Transformers (DiTs). Given this, Cornserve's planner automatically finds an optimized deployment plan for the model, including whether and how to disaggregate the model into smaller components based on model and workload characteristics. Cornserve's distributed runtime then executes the model per the plan, efficiently handling Any-to-Any model heterogeneity during online serving. Evaluations show that Cornserve can efficiently serve diverse Any-to-Any models and workloads, delivering up to 3.81× throughput improvement and up to 5.79× tail latency reduction over existing solutions.
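To make the component-graph idea concrete, here is a minimal, purely illustrative sketch of how an Any-to-Any model might be declared as a graph of heterogeneous components and ordered for execution. The abstract does not show Cornserve's actual API, so the class names, `kind` labels, and topological-sort planner step below are all assumptions, not the system's real interface.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ModelGraph:
    """Hypothetical declaration of an Any-to-Any model's components.

    This is NOT Cornserve's API; it only illustrates the idea of
    modeling a multimodal model as a graph of heterogeneous parts.
    """
    nodes: dict = field(default_factory=dict)   # name -> component kind
    edges: list = field(default_factory=list)   # (src, dst) data-flow pairs

    def add(self, name: str, kind: str) -> None:
        self.nodes[name] = kind

    def connect(self, src: str, dst: str) -> None:
        self.edges.append((src, dst))

    def topo_order(self) -> list:
        """Kahn's algorithm: one valid execution order of components,
        the kind of structure a planner could partition and schedule."""
        indeg = {n: 0 for n in self.nodes}
        for _, dst in self.edges:
            indeg[dst] += 1
        ready = deque(n for n, d in indeg.items() if d == 0)
        order = []
        while ready:
            node = ready.popleft()
            order.append(node)
            for src, dst in self.edges:
                if src == node:
                    indeg[dst] -= 1
                    if indeg[dst] == 0:
                        ready.append(dst)
        return order

# Example: image + audio inputs feed an LLM, whose outputs
# condition a Diffusion Transformer that generates an image.
g = ModelGraph()
g.add("image_encoder", "encoder")
g.add("audio_encoder", "encoder")
g.add("llm", "autoregressive")
g.add("dit", "generator")
g.connect("image_encoder", "llm")
g.connect("audio_encoder", "llm")
g.connect("llm", "dit")
print(g.topo_order())  # encoders first, then "llm", then "dit"
```

A planner working over such a graph could, for instance, decide to serve the encoders and the DiT on separate hardware from the LLM (disaggregation) or fuse them, depending on workload characteristics; the sketch only shows the graph structure such a decision would be made over.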