🤖 AI Summary
This work addresses the GPU memory bottleneck that limits large-scale LLM-based multi-agent simulations: each agent maintains private GPU-resident state—including model weights, prefix caches, and adapters—which quickly exhausts device memory as the agent count grows. The authors observe that agent activations are sparse and that the order of future agent invocations can be estimated, and they capture both properties in a unified abstraction called "invocation distance." Building on this abstraction, their system ScaleSim performs priority-based memory eviction and proactive prefetching, exposed through a modular memory interface that supports diverse agent-specific state. Experiments show that ScaleSim achieves up to 1.74× speedup over SGLang on simulation benchmarks, improving both the scalability and runtime efficiency of large-scale multi-agent simulations.
📝 Abstract
LLM-based multi-agent simulations are increasingly adopted across application domains, but remain difficult to scale due to GPU memory pressure. Each agent maintains private GPU-resident states, including models, prefix caches, and adapters, which quickly exhaust device memory as the agent count grows. We identify two key properties of these workloads: sparse agent activation and an estimable agent invocation order. Based on an analysis of representative workload classes, we introduce invocation distance, a unified abstraction that estimates the relative order in which agents will issue future LLM requests. Leveraging this abstraction, we present ScaleSim, a memory-efficient LLM serving system for large-scale multi-agent simulations. ScaleSim enables proactive prefetching and priority-based eviction, supports diverse agent-specific memory through a modular interface, and achieves up to 1.74x speedup over SGLang on simulation benchmarks.
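The eviction/prefetching policy described above can be sketched in a few lines. The following is a minimal illustration, not ScaleSim's actual interface: the class name, method names, and the use of integer distances are all assumptions. The core idea from the paper is that each agent's GPU state is ranked by its estimated invocation distance—how far away the agent's next LLM request is in the predicted invocation order—so the scheduler evicts the farthest agents and prefetches the soonest ones.

```python
import heapq

class InvocationDistanceScheduler:
    """Illustrative sketch (hypothetical API, not ScaleSim's real one):
    rank per-agent GPU state by estimated invocation distance."""

    def __init__(self, gpu_capacity: int):
        self.gpu_capacity = gpu_capacity      # max agent states resident on GPU
        self.resident: dict[str, int] = {}    # agent -> estimated distance
        self.offloaded: dict[str, int] = {}   # agent states held off-GPU

    def update_distance(self, agent: str, distance: int) -> None:
        # Refresh an agent's estimate as the simulation advances.
        store = self.resident if agent in self.resident else self.offloaded
        store[agent] = distance

    def evict_candidates(self, k: int) -> list[str]:
        # Priority-based eviction: agents whose next invocation is
        # farthest away are evicted first.
        return heapq.nlargest(k, self.resident, key=self.resident.get)

    def prefetch_candidates(self, k: int) -> list[str]:
        # Proactive prefetching: bring back the offloaded agents
        # expected to issue requests soonest.
        return heapq.nsmallest(k, self.offloaded, key=self.offloaded.get)

    def admit(self, agent: str, distance: int) -> None:
        # Make room if needed, then mark the agent's state GPU-resident.
        while len(self.resident) >= self.gpu_capacity:
            victim = self.evict_candidates(1)[0]
            self.offloaded[victim] = self.resident.pop(victim)
        self.resident[agent] = self.offloaded.pop(agent, distance)
```

For example, with `gpu_capacity=2`, admitting agents with distances 1, 5, and 2 in turn evicts the distance-5 agent (farthest from its next invocation), and that agent becomes the top prefetch candidate once its turn approaches.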