Autellix: An Efficient Serving Engine for LLM Agents as General Programs

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM serving systems treat agent programs as stateless request streams, ignoring internal call dependencies and cross-program contextual relationships; this leads to severe head-of-line blocking and cumulative latency. This paper proposes an inference-serving engine tailored to LLM-based agent programs, introducing a context-aware scheduling mechanism that treats *programs*, rather than individual requests, as the fundamental scheduling unit. The authors design two schedulers, one for single-threaded programs and one for distributed programs, both supporting execution-history-guided preemption and dynamic priority adjustment of LLM calls. By modeling program dependencies, intercepting requests with low latency, and adopting a vLLM-compatible architecture, the system achieves 4-15x higher program throughput than vLLM under equivalent latency constraints across diverse models and heterogeneous agent workloads, establishing fine-grained, program-level scheduling for LLM serving.

📝 Abstract
Large language model (LLM) applications are evolving beyond simple chatbots into dynamic, general-purpose agentic programs, which scale LLM calls and output tokens to help AI agents reason, explore, and solve complex tasks. However, existing LLM serving systems ignore dependencies between programs and calls, missing significant opportunities for optimization. Our analysis reveals that programs submitted to LLM serving engines experience long cumulative wait times, primarily due to head-of-line blocking at both the individual-request and program levels. To address this, we introduce Autellix, an LLM serving system that treats programs as first-class citizens to minimize their end-to-end latencies. Autellix intercepts LLM calls submitted by programs, enriching schedulers with program-level context. We propose two scheduling algorithms, one for single-threaded and one for distributed programs, that preempt and prioritize LLM calls based on their programs' previously completed calls. Our evaluation demonstrates that across diverse LLMs and agentic workloads, Autellix improves throughput of programs by 4-15x at the same latency compared to state-of-the-art systems, such as vLLM.
Problem

Research questions and friction points this paper is trying to address.

Exploiting dependencies between LLM programs and their calls for optimization
Reducing cumulative wait times from head-of-line blocking in LLM systems
Minimizing end-to-end latency by treating agentic programs as first-class citizens
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treats programs as first-class citizens to minimize latency
Enriches schedulers with program-level context by intercepting LLM calls
Preempts and prioritizes LLM calls based on programs' previously completed calls
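The program-level prioritization described above can be sketched as a least-attained-service queue: each new LLM call inherits a priority key from the cumulative service time its parent program has already received, so programs with little completed work jump ahead of long-running ones. This is a hypothetical, simplified stand-in for Autellix's actual schedulers; the class and method names are illustrative, not from the paper.

```python
import heapq
from collections import defaultdict

class ProgramLevelScheduler:
    """Toy sketch of program-aware call scheduling (not Autellix's code):
    calls are ordered by how much service their parent program has already
    received, approximating a least-attained-service policy."""

    def __init__(self):
        self._served = defaultdict(float)  # program_id -> cumulative service time
        self._queue = []                   # heap of (served_time, seq, program_id, call)
        self._seq = 0                      # insertion counter breaks ties FIFO

    def submit(self, program_id, call):
        # A new call inherits its priority from the program's completed calls.
        heapq.heappush(self._queue,
                       (self._served[program_id], self._seq, program_id, call))
        self._seq += 1

    def next_call(self):
        # Dispatch the call whose program has received the least service so far.
        _, _, program_id, call = heapq.heappop(self._queue)
        return program_id, call

    def record_completion(self, program_id, service_time):
        # Completed calls raise the program's key, demoting its future calls.
        self._served[program_id] += service_time
```

Under this policy, a freshly submitted short program overtakes queued calls from a program that has already consumed significant service time, which mirrors the paper's goal of avoiding program-level head-of-line blocking.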