ScaleSim: Serving Large-Scale Multi-Agent Simulation with Invocation Distance-Based Memory Management

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the severe memory bottleneck in large-scale LLM-based multi-agent simulations, where each agent maintains private GPU-resident state—model weights, prefix caches, and adapters—that quickly exhausts device memory as the agent count grows. To overcome this challenge, the authors propose ScaleSim, a system built around "invocation distance," a unified abstraction that captures both the sparsity of agent activations and the predictability of agent invocation order in multi-agent simulations. Leveraging this abstraction, ScaleSim combines priority-based memory eviction with proactive prefetching, exposed through a modular memory interface that supports efficient scheduling of diverse agent-specific state. Experimental results show that ScaleSim achieves up to a 1.74× speedup over SGLang on simulation benchmarks, improving both the scalability and runtime efficiency of large-scale multi-agent simulations.
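ScaleSim's internals are not spelled out in this summary, but the core idea—rank each agent's GPU state by its estimated invocation distance, evict the farthest resident agents, and prefetch the nearest non-resident ones—can be sketched in a few lines. The following is a minimal illustration under that assumption; the class and method names (`InvocationDistanceManager`, `schedule`) are hypothetical and not ScaleSim's actual API.

```python
import heapq


class InvocationDistanceManager:
    """Illustrative sketch: keep the agents with the smallest estimated
    invocation distance resident on the GPU, up to a fixed capacity."""

    def __init__(self, capacity):
        self.capacity = capacity  # max number of agent states resident on GPU
        self.resident = {}        # agent_id -> estimated invocation distance
        self.distances = {}       # agent_id -> estimated invocation distance

    def update_distance(self, agent_id, distance):
        """Record a new estimate of how soon this agent will issue a request."""
        self.distances[agent_id] = distance
        if agent_id in self.resident:
            self.resident[agent_id] = distance

    def schedule(self):
        """Return (evict, prefetch) so the nearest agents end up resident."""
        # Agents expected to issue requests soonest (smallest distance first).
        nearest = heapq.nsmallest(
            self.capacity, self.distances.items(), key=lambda kv: kv[1]
        )
        target = {aid for aid, _ in nearest}
        # Priority-based eviction: drop resident agents outside the target set.
        evict = [aid for aid in self.resident if aid not in target]
        # Proactive prefetching: load target agents not yet resident.
        prefetch = [aid for aid in target if aid not in self.resident]
        for aid in evict:
            del self.resident[aid]
        for aid in prefetch:
            self.resident[aid] = self.distances[aid]
        return evict, prefetch
```

For example, with capacity for two agent states and distances `a=1, b=5, c=2`, the first call to `schedule` prefetches `a` and `c`; if `b`'s estimated distance then drops to 0 (it is about to be invoked), the next call evicts the farthest resident agent `c` and prefetches `b`. The real system would additionally move weights, prefix caches, and adapters asynchronously, which this sketch omits.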

📝 Abstract
LLM-based multi-agent simulations are increasingly adopted across application domains, but remain difficult to scale due to GPU memory pressure. Each agent maintains private GPU-resident states, including models, prefix caches, and adapters, which quickly exhaust device memory as the agent count grows. We identify two key properties of these workloads: sparse agent activation and an estimable agent invocation order. Based on an analysis of representative workload classes, we introduce invocation distance, a unified abstraction that estimates the relative order in which agents will issue future LLM requests. Leveraging this abstraction, we present ScaleSim, a memory-efficient LLM serving system for large-scale multi-agent simulations. ScaleSim enables proactive prefetching and priority-based eviction, supports diverse agent-specific memory through a modular interface, and achieves up to 1.74x speedup over SGLang on simulation benchmarks.
Problem

Research questions and friction points this paper is trying to address.

multi-agent simulation
GPU memory pressure
large-scale LLM serving
memory management
Innovation

Methods, ideas, or system contributions that make the work stand out.

invocation distance
multi-agent simulation
LLM serving
memory management
proactive prefetching