Orla: A Library for Serving LLM-Based Multi-Agent Systems

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of developing multi-agent systems with large language models (LLMs), which currently rely on manual orchestration of inference, tool invocation, and infrastructure, lacking a unified abstraction to decouple workflow policies from execution logic. To bridge this gap, we propose Orla, a library that introduces a unified service layer atop inference engines such as vLLM. Orla employs a three-tiered architecture—comprising a stage mapper, a workflow orchestrator, and a memory manager—to enable cross-model and cross-backend stage assignment, context-aware resource scheduling, and KV cache management across workflow boundaries. Evaluation in a customer service scenario demonstrates that Orla significantly reduces latency and cost compared to a single-model vLLM baseline, while workflow-level caching substantially reduces time-to-first-token.

📝 Abstract
We introduce Orla, a library for constructing and running LLM-based agentic systems. Modern agentic applications consist of workflows that combine multiple LLM inference steps, tool calls, and heterogeneous infrastructure. Today, developers typically build these systems by manually composing orchestration code with LLM serving engines and tool execution logic. Orla provides a general abstraction that separates request execution from workflow-level policy. It acts as a serving layer above existing LLM inference engines: developers define workflows composed of stages, while Orla manages how those stages are mapped, executed, and coordinated across models and backends. It provides agent-level control through three mechanisms: a stage mapper, which assigns each stage to an appropriate model and backend; a workflow orchestrator, which schedules stages and manages their resources and context; and a memory manager, which manages inference state such as the KV cache across workflow boundaries. We demonstrate Orla with a customer support workflow that exercises many of its capabilities. We evaluate Orla on two datasets, showing that stage mapping improves latency and cost compared to a single-model vLLM baseline, while workflow-level cache management reduces time-to-first-token.
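To make the abstract's three mechanisms concrete, here is a minimal, hypothetical sketch of how a stage-based workflow with a stage mapper and a workflow orchestrator might look. All names, signatures, and backends below are illustrative assumptions, not Orla's actual API; real stages would wrap LLM inference or tool calls rather than toy string functions.

```python
# Hypothetical sketch of Orla-style abstractions: stages, a stage mapper
# that routes each stage to a model/backend, and an orchestrator that
# runs stages in order while threading context between them.
# None of these names come from Orla itself; they are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Stage:
    name: str
    run: Callable[[str], str]   # the stage's work (stand-in for an LLM or tool call)
    model_hint: str = "small"   # preference the mapper uses to pick a backend


def map_stage(stage: Stage, backends: Dict[str, str]) -> str:
    """Stage mapper: assign a stage to a backend based on its model hint."""
    return backends.get(stage.model_hint, backends["default"])


def run_workflow(stages: List[Stage], text: str, backends: Dict[str, str]) -> str:
    """Workflow orchestrator: execute stages sequentially, passing context along."""
    for stage in stages:
        backend = map_stage(stage, backends)
        # Tag the context with the chosen backend so routing is visible.
        text = stage.run(f"[{backend}] {text}")
    return text


# Toy usage mirroring the paper's customer support example: a cheap model
# classifies the request, a larger model drafts the reply.
backends = {"small": "vllm-small", "large": "vllm-large", "default": "vllm-small"}
stages = [
    Stage("classify", run=lambda t: t + " -> intent:refund", model_hint="small"),
    Stage("respond", run=lambda t: t + " -> reply drafted", model_hint="large"),
]
result = run_workflow(stages, "customer message", backends)
print(result)
```

The point of the separation is that the workflow definition (the list of stages) carries only policy hints, while mapping and execution are decided by the serving layer, which is what lets the same workflow run across different models and backends.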
Problem

Research questions and friction points this paper is trying to address.

LLM-based multi-agent systems
workflow orchestration
serving infrastructure
stage mapping
inference state management
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based multi-agent systems
workflow orchestration
stage mapping
KV cache management
inference serving