Inject, Fork, Compare: Defining an Interaction Vocabulary for Multi-Agent Simulation Platforms

πŸ“… 2025-09-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing LLM-based multi-agent simulations lack intervenable interaction paradigms, hindering causal hypothesis testing and multi-trajectory comparative analysis. To address this, we propose three foundational interaction primitives—*injection*, *forking*, and *comparison*—that together form a structured operational grammar for LLM multi-agent simulation. This grammar enables dynamic event injection, parallel timeline forking, and multidimensional behavioral comparison across trajectories. By elevating simulation from passive observation to active experimentation, the framework supports controllable intervention in, and causal attribution of, emergent behaviors. Evaluated in a 14-agent commodity market simulation, the method visualizes and dissects how agent strategies evolve differently under distinct interventions, demonstrating its effectiveness and scalability for causal inquiry in complex adaptive systems.

πŸ“ Abstract
LLM-based multi-agent simulations are a rapidly growing field of research, but current simulations often lack clear modes for interaction and analysis, limiting the "what if" scenarios researchers are able to investigate. In this demo, we define three core operations for interacting with multi-agent simulations: inject, fork, and compare. Inject allows researchers to introduce external events at any point during simulation execution. Fork creates independent timeline branches from any timestamp, preserving complete state while allowing divergent exploration. Compare facilitates parallel observation of multiple branches, revealing how different interventions lead to distinct emergent behaviors. Together, these operations establish a vocabulary that transforms linear simulation workflows into interactive, explorable spaces. We demonstrate this vocabulary through a commodity market simulation with fourteen AI agents, where researchers can inject contrasting events and observe divergent outcomes across parallel timelines. By defining these fundamental operations, we provide a starting point for systematic causal investigation in LLM-based agent simulations, moving beyond passive observation toward active experimentation.
Problem

Research questions and friction points this paper is trying to address.

Defining interaction vocabulary for multi-agent simulations
Enabling systematic causal investigation in LLM simulations
Transforming linear workflows into interactive explorable spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inject external events during simulation
Fork independent timeline branches
Compare parallel branches for behavior analysis
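The three operations above can be illustrated with a minimal sketch. The `Timeline` class, its toy price dynamics, and the event format here are hypothetical and not from the paper; they only show how inject, fork, and compare compose: forking snapshots complete state, injection queues an external event for the next step, and compare reads the same metric across parallel branches.

```python
from copy import deepcopy

class Timeline:
    """Minimal sketch of a forkable simulation timeline (hypothetical API)."""

    def __init__(self, state=None, events=None):
        self.state = state if state is not None else {"price": 100.0}
        self.events = list(events or [])

    def inject(self, event):
        """Inject an external event to take effect on the next step."""
        self.events.append(event)

    def fork(self):
        """Create an independent branch, preserving complete state."""
        return Timeline(deepcopy(self.state), deepcopy(self.events))

    def step(self):
        """Toy dynamics: each pending event shifts the commodity price."""
        self.state["price"] += sum(e["delta"] for e in self.events)
        self.events.clear()

def compare(*timelines, key="price"):
    """Observe branches in parallel: read the same metric from each."""
    return [t.state[key] for t in timelines]

base = Timeline()
branch_a = base.fork()
branch_b = base.fork()
branch_a.inject({"delta": +5.0})   # e.g. a supply-shortage event
branch_b.inject({"delta": -5.0})   # e.g. a contrasting demand-drop event
for t in (branch_a, branch_b):
    t.step()
print(compare(branch_a, branch_b))  # -> [105.0, 95.0]
```

In a real LLM-agent setting, `step` would run each agent's model call and `state` would hold full agent memories, but the branching discipline is the same: fork before intervening, so the unmodified baseline remains available for comparison.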
πŸ”Ž Similar Papers
2024-03-04 · Proceedings of the 17th International Conference on Agents and Artificial Intelligence · Citations: 3