🤖 AI Summary
This work addresses the challenges of efficient inference in Any-to-Any multimodal models, which arise from the diverse combinations of input and output modalities, heterogeneous computational paths, and varying scalability requirements across components. To tackle these issues, the authors propose Cornserve, a distributed serving system tailored for such models. Cornserve employs a flexible task abstraction to express model computation graphs, enabling component-level disaggregation and independent scaling. It introduces a record-and-replay execution model combined with direct producer-to-consumer tensor transfer, optimizing both computation scheduling and data dependency-aware communication. Built on Kubernetes, Cornserve supports arbitrary modality combinations and demonstrates up to a 3.81× improvement in throughput and a 5.79× reduction in tail latency.
📝 Abstract
Any-to-Any models are an emerging class of multimodal models that accept combinations of multimodal data (e.g., text, image, video, audio) as input and generate them as output. Serving these models is challenging: different requests with different input and output modalities traverse different paths through the model computation graph, and each component of the model has different scaling characteristics.
We present Cornserve, a distributed serving system for generic Any-to-Any models. Cornserve provides a flexible task abstraction for expressing Any-to-Any model computation graphs, enabling component disaggregation and independent scaling. The distributed runtime dispatches compute to the data plane via an efficient record-and-replay execution model that keeps track of data dependencies, and forwards tensor data between components directly from the producer to the consumer. Built on Kubernetes with approximately 23K new lines of Python, Cornserve supports diverse Any-to-Any models and delivers up to 3.81$\times$ higher throughput and 5.79$\times$ lower tail latency. Cornserve is open-source, and the demo video is available on YouTube.
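The abstract describes a task abstraction in which an Any-to-Any model is expressed as a computation graph of disaggregated components, and each request traverses only the path its modalities require. Below is a minimal, hypothetical sketch of that idea; the class and component names are illustrative assumptions, not Cornserve's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One disaggregated model component (hypothetical stand-in)."""
    name: str

    def run(self, data: str) -> str:
        # A real component would run GPU inference; here we just tag the data.
        return f"{self.name}({data})"

@dataclass
class TaskGraph:
    """Illustrative Any-to-Any computation graph: components plus edges."""
    components: dict = field(default_factory=dict)
    edges: dict = field(default_factory=dict)  # name -> downstream component names

    def add(self, comp: Component, downstream=()):
        self.components[comp.name] = comp
        self.edges[comp.name] = list(downstream)

    def execute(self, entry: str, data: str):
        # Walk the path for this request, recording which component produced
        # each intermediate output (a stand-in for tracked data dependencies).
        trace = []
        node, out = entry, data
        while node is not None:
            out = self.components[node].run(out)
            trace.append((node, out))
            nxt = self.edges.get(node, [])
            node = nxt[0] if nxt else None
        return out, trace

graph = TaskGraph()
graph.add(Component("image_encoder"), downstream=["llm"])
graph.add(Component("audio_encoder"), downstream=["llm"])
graph.add(Component("llm"), downstream=["image_decoder"])
graph.add(Component("image_decoder"))

# A text+image request enters at the image encoder and never touches
# the audio encoder, so each component can be scaled independently.
result, trace = graph.execute("image_encoder", "img")
```

In the real system, intermediate tensors would flow directly from producer to consumer components rather than through the control plane, which is what the direct tensor forwarding described above avoids.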