Accelerating Language Model Workflows with Prompt Choreography

📅 2025-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address redundant computation, high per-message latency, and slow end-to-end execution in multi-agent large language model (LLM) workflows, this paper proposes Prompt Choreography, a framework built around a dynamic global KV cache. The cache enables cross-agent and cross-invocation reuse of message encodings, letting each call redirect its attention to an arbitrary, reordered subset of previously encoded messages. Cache-aware fine-tuning and message-level attention subset selection preserve semantic consistency with re-encoding, and a parallel LLM invocation scheduling strategy maximizes throughput. Experiments demonstrate a 2.0–6.2× reduction in time-to-first-token and over 2.2× end-to-end speedup, with gains most pronounced in high-redundancy workflows. This work applies dynamic KV caching to multi-agent collaborative reasoning, establishing a new paradigm for efficient LLM-based workflow orchestration.
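The core idea, cross-agent reuse of message encodings, can be illustrated with a minimal sketch. This is not the paper's implementation: `GlobalKVCache`, `encode_message`, and the dict-based KV entries are hypothetical stand-ins for a real transformer forward pass and its key/value tensors.

```python
# Illustrative sketch only: a global KV cache keyed by message ID,
# reused across agent calls instead of re-encoding shared messages.

class GlobalKVCache:
    def __init__(self):
        self._store = {}       # message_id -> encoded KV entry
        self.encode_calls = 0  # counts simulated forward passes

    def encode_message(self, message_id, text):
        # Stand-in for running the model over one message.
        self.encode_calls += 1
        return {"id": message_id, "kv": [ord(c) for c in text]}

    def get_or_encode(self, message_id, text):
        if message_id not in self._store:
            self._store[message_id] = self.encode_message(message_id, text)
        return self._store[message_id]

    def build_context(self, message_ids, messages):
        # Each call attends to an arbitrary, reordered subset of cached messages.
        return [self.get_or_encode(mid, messages[mid]) for mid in message_ids]


cache = GlobalKVCache()
messages = {"sys": "You are agent A.", "u1": "Summarize the doc.", "a1": "Draft v1."}

# Agent 1 encodes all three messages.
cache.build_context(["sys", "u1", "a1"], messages)
# Agent 2 reuses two of them, reordered: no new forward passes.
cache.build_context(["a1", "sys"], messages)
print(cache.encode_calls)  # → 3 (instead of 5 without the cache)
```

The second call pays no encoding cost for `a1` and `sys`, which is the source of the reported time-to-first-token reductions in redundancy-heavy workflows.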

📝 Abstract
Large language models are increasingly deployed in multi-agent workflows. We introduce Prompt Choreography, a framework that efficiently executes LLM workflows by maintaining a dynamic, global KV cache. Each LLM call can attend to an arbitrary, reordered subset of previously encoded messages. Parallel calls are supported. Though caching messages' encodings sometimes gives different results from re-encoding them in a new context, we show in diverse settings that fine-tuning the LLM to work with the cache can help it mimic the original results. Prompt Choreography significantly reduces per-message latency (2.0–6.2× faster time-to-first-token) and achieves substantial end-to-end speedups (>2.2×) in some workflows dominated by redundant computation.
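The abstract's "arbitrary, reordered subset" can be pictured as a visibility mask over the cached messages. A minimal sketch, with names assumed for illustration (the paper does not specify this interface):

```python
# Hypothetical sketch of message-level attention subset selection:
# a new call exposes only a chosen, reordered subset of cached messages.

def subset_attention_mask(cached_ids, selected_ids):
    # mask[i] is True iff cached message i is visible to the new call.
    visible = set(selected_ids)
    return [mid in visible for mid in cached_ids]

def reordered_positions(selected_ids):
    # The subset also gets fresh positions in its new order.
    return {mid: pos for pos, mid in enumerate(selected_ids)}

cached = ["sys", "u1", "a1", "a2"]
# The new call attends only to the system prompt and the second reply,
# with the reply placed after the system prompt.
mask = subset_attention_mask(cached, ["sys", "a2"])
print(mask)                          # → [True, False, False, True]
print(reordered_positions(["sys", "a2"]))  # → {'sys': 0, 'a2': 1}
```

In a real model this mask and position remapping would be applied inside attention; the fine-tuning mentioned in the abstract compensates for the mismatch between cached and freshly re-encoded contexts.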
Problem

Research questions and friction points this paper is trying to address.

High time-to-first-token latency in multi-agent LLM workflows
Redundant re-encoding of messages shared across agents and invocations
No support for calls that attend to reordered subsets of prior messages or run in parallel
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic global KV cache for efficient LLM workflows
Arbitrary reordered subset attention in LLM calls
Cache-aware fine-tuning so cached encodings mimic re-encoded results
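The parallel-invocation contribution amounts to dispatching independent calls in a workflow stage concurrently. A sketch under assumed names (`run_llm_call` is a placeholder, not an API from the paper):

```python
# Hypothetical sketch of parallel LLM invocation scheduling: calls with no
# mutual dependencies are dispatched concurrently, stage by stage.

from concurrent.futures import ThreadPoolExecutor

def run_llm_call(name, inputs):
    # Placeholder for one model invocation over its input messages.
    return f"{name}({','.join(inputs)})"

def run_stage(calls):
    # Calls within a stage are independent, so they run in parallel.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(run_llm_call, name, deps)
                   for name, deps in calls}
        return {name: f.result() for name, f in futures.items()}

# Two reviewer agents critique the same draft in parallel; a judge merges them.
stage1 = run_stage([("reviewer_a", ["draft"]), ("reviewer_b", ["draft"])])
stage2 = run_stage([("judge", sorted(stage1.values()))])
print(stage2["judge"])  # → judge(reviewer_a(draft),reviewer_b(draft))
```

Combined with the global cache, the parallel reviewers would also share the draft's encoding rather than each re-encoding it.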