Chimera: Latency- and Performance-Aware Multi-agent Serving for Heterogeneous LLMs

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of jointly optimizing end-to-end latency and task performance for multi-agent workflows on heterogeneous large language model (LLM) deployments, where existing serving systems assume homogeneous clusters of identical replicas. The authors propose Chimera, a scheduling system that combines confidence-guided semantic routing, prediction of the workflow's total remaining output length, and congestion estimation from in-flight token counts to balance load across models of different sizes and capabilities. On code generation and mathematical reasoning tasks, Chimera reduces end-to-end latency by 1.2–2.4× and improves task performance by 8.0–9.5 percentage points over baselines such as vLLM.

📝 Abstract
Multi-agent applications often execute complex tasks as multi-stage workflows, where each stage is an LLM call whose output becomes part of the context for subsequent steps. Existing LLM serving systems largely assume homogeneous clusters with identical model replicas. This design overlooks the potential of heterogeneous deployments, where models of different sizes and capabilities enable finer trade-offs between latency and performance. However, heterogeneity introduces new scheduling challenges across models with diverse throughput and performance. We present Chimera, a predictive scheduling system for multi-agent workflow serving on heterogeneous LLM clusters that jointly improves end-to-end latency and task performance. Chimera applies semantic routing to estimate per-model confidence scores for each request, predicts the total remaining output length of the workflow, and estimates per-model congestion from in-flight predicted token volumes for load balancing. We evaluate Chimera on representative agentic workflows for code generation and math reasoning using multiple heterogeneous LLM configurations. Across comparable settings, Chimera traces the best latency-performance frontier, reducing end-to-end latency by 1.2–2.4× and improving task performance by 8.0–9.5 percentage points on average over competitive baselines including vLLM.
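The abstract describes three signals combined at scheduling time: a per-model confidence score from semantic routing, a prediction of the request's remaining output length, and a congestion estimate from in-flight predicted tokens. The paper does not give the exact scoring rule, so the following is only a minimal illustrative sketch of how such a router could trade confidence against estimated queueing delay; all names (`ModelState`, `route`, the `alpha` weight, the linear score) are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelState:
    name: str
    confidence: float       # semantic-routing confidence for this request, in [0, 1]
    throughput_tps: float   # sustained decode throughput of this replica (tokens/sec)
    in_flight_tokens: int   # predicted output tokens still pending on this replica

def predicted_wait(state: ModelState, request_tokens: int) -> float:
    # Congestion estimate: pending in-flight tokens plus this request's
    # predicted remaining output, divided by the replica's throughput.
    return (state.in_flight_tokens + request_tokens) / state.throughput_tps

def route(models: list[ModelState], predicted_output_tokens: int,
          alpha: float = 0.5) -> ModelState:
    """Pick the model with the best confidence-vs-congestion trade-off.

    alpha weights task performance (confidence) against estimated wait;
    the linear combination here is a placeholder scoring rule.
    """
    def score(m: ModelState) -> float:
        return alpha * m.confidence - (1 - alpha) * predicted_wait(m, predicted_output_tokens)
    return max(models, key=score)
```

With a latency-leaning `alpha`, a lightly loaded small model can win over a more capable but congested large one; pushing `alpha` toward 1 recovers confidence-only routing. In practice the two terms would need normalization onto a common scale, which this sketch omits.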
Problem

Research questions and friction points this paper is trying to address.

heterogeneous LLMs
multi-agent serving
latency-performance trade-off
workflow scheduling
LLM serving
Innovation

Methods, ideas, or system contributions that make the work stand out.

heterogeneous LLM serving
multi-agent workflows
semantic routing
predictive scheduling
latency-performance trade-off