🤖 AI Summary
To address redundant computation, message latency, and slow end-to-end execution in multi-agent large language model (LLM) workflows, this paper proposes Prompt Choreography—a framework centered on a dynamic, global KV cache. The cache enables cross-agent and cross-invocation reuse of message encodings together with attention redirection, and is combined with cache-aware fine-tuning and message-level attention subset selection to preserve semantic consistency. A parallel LLM invocation scheduling strategy is also designed to maximize throughput. Experiments demonstrate 2.0–6.2× faster time-to-first-token and over 2.2× end-to-end speedups in workflows dominated by redundant computation. This work applies dynamic KV caching to multi-agent collaborative reasoning, offering a new approach to efficient LLM-based workflow orchestration.
📝 Abstract
Large language models are increasingly deployed in multi-agent workflows. We introduce Prompt Choreography, a framework that efficiently executes LLM workflows by maintaining a dynamic, global KV cache. Each LLM call can attend to an arbitrary, reordered subset of previously encoded messages. Parallel calls are supported. Though caching messages' encodings sometimes gives different results from re-encoding them in a new context, we show in diverse settings that fine-tuning the LLM to work with the cache can help it mimic the original results. Prompt Choreography significantly reduces per-message latency (2.0–6.2× faster time-to-first-token) and achieves substantial end-to-end speedups (>2.2×) in some workflows dominated by redundant computation.
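The core idea — encode each message once, then let later calls attend to an arbitrary, reordered subset of the cached encodings — can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: the class and method names (`GlobalKVCache`, `context_for`) are invented here, and random vectors stand in for real attention keys/values.

```python
import numpy as np

class GlobalKVCache:
    """Toy sketch of a message-level KV cache shared across agent calls.

    Each message is encoded exactly once; subsequent calls assemble an
    arbitrary, reordered subset of the cached (key, value) blocks rather
    than re-encoding the whole prompt from scratch.
    """

    def __init__(self, d_model=8):
        self.d = d_model
        self.store = {}        # message_id -> (K, V), each (seq_len, d_model)
        self.encode_calls = 0  # counts expensive "model" encodings

    def _encode(self, text):
        # Stand-in for running the model's attention layers over `text`.
        self.encode_calls += 1
        rng = np.random.default_rng(len(text))
        n = len(text.split())  # one "token" per word, for illustration
        return rng.normal(size=(n, self.d)), rng.normal(size=(n, self.d))

    def add(self, message_id, text):
        # Cache hit: a message shared across agents is never re-encoded.
        if message_id not in self.store:
            self.store[message_id] = self._encode(text)

    def context_for(self, message_ids):
        """Concatenate cached K/V blocks in the order this call requests."""
        ks, vs = zip(*(self.store[m] for m in message_ids))
        return np.concatenate(ks), np.concatenate(vs)

cache = GlobalKVCache()
cache.add("sys", "you are a helpful planner agent")   # encoded once
cache.add("u1", "draft a plan")
cache.add("sys", "you are a helpful planner agent")   # cache hit, no re-encode
K, V = cache.context_for(["u1", "sys"])               # reordered subset
```

Two agents sharing the system message pay its encoding cost once; each call then chooses which cached messages to attend to and in what order. The real system additionally needs cache-aware fine-tuning, since reused encodings carry positional information from their original context.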