🤖 AI Summary
This work proposes a generative agent-based modeling (ABM) paradigm centered on population-level reasoning. It addresses the limitations of large language model (LLM)-driven multi-agent systems, which scale poorly and lack temporal alignment in state calibration, and of classical ABMs, which struggle to integrate rich individual-level signals and non-stationary behaviors. The approach leverages an ANCHOR clustering strategy to identify behaviorally coherent agent clusters and combines state-specific symbolic agents, a multimodal neural transition model, and an uncertainty-aware knowledge-fusion mechanism. This design substantially reduces LLM invocations while improving simulation fidelity. Evaluated on public health, financial, and social science tasks, the method consistently outperforms mechanistic models, purely neural approaches, and LLM baselines in both event-timing accuracy and calibration.
📝 Abstract
Large language model (LLM)-based multi-agent systems enable expressive agent reasoning but are expensive to scale and poorly calibrated for timestep-aligned state-transition simulation, while classical agent-based models (ABMs) offer interpretability but struggle to integrate rich individual-level signals and non-stationary behaviors. We propose PhysicsAgentABM, which shifts inference to behaviorally coherent agent clusters: state-specialized symbolic agents encode mechanistic transition priors, a multimodal neural transition model captures temporal and interaction dynamics, and uncertainty-aware epistemic fusion yields calibrated cluster-level transition distributions. Individual agents then stochastically realize transitions under local constraints, decoupling population inference from entity-level variability. We further introduce ANCHOR, an LLM-agent-driven clustering strategy based on cross-contextual behavioral responses and a novel contrastive loss, reducing LLM calls by 6-8x. Experiments across public health, finance, and the social sciences show consistent gains in event-time accuracy and calibration over mechanistic, neural, and LLM baselines. By re-architecting generative ABM around population-level inference with uncertainty-aware neuro-symbolic fusion, PhysicsAgentABM establishes a new paradigm for scalable and calibrated simulation with LLMs.
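The core decoupling the abstract describes, fusing symbolic and neural transition estimates into one calibrated cluster-level distribution and then letting each agent stochastically realize its own transition, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the inverse-variance fusion rule, the epidemic-style state names, and all function names here are assumptions, since the abstract does not specify the exact fusion mechanism.

```python
import random

def fuse_cluster_distribution(symbolic_prior, neural_pred, symbolic_var, neural_var):
    """Uncertainty-aware fusion of two per-state transition estimates.

    Uses inverse-variance (precision) weighting, a common uncertainty-aware
    scheme; the paper's exact fusion rule is not given in the abstract.
    Lower variance -> higher weight for that source.
    """
    w_sym, w_neu = 1.0 / symbolic_var, 1.0 / neural_var
    fused = {
        state: (w_sym * symbolic_prior[state] + w_neu * neural_pred[state]) / (w_sym + w_neu)
        for state in symbolic_prior
    }
    total = sum(fused.values())  # renormalize so the result is a distribution
    return {state: p / total for state, p in fused.items()}

def realize_transitions(agents, cluster_dist, rng):
    """Each agent samples its next state from the shared cluster-level
    distribution: one inference per cluster, not one LLM call per agent."""
    states = list(cluster_dist)
    weights = [cluster_dist[s] for s in states]
    return {agent: rng.choices(states, weights=weights)[0] for agent in agents}

# Illustrative example: an "infected" cluster in an epidemic-style simulation.
symbolic = {"infected": 0.7, "recovered": 0.3}  # mechanistic transition prior
neural = {"infected": 0.5, "recovered": 0.5}    # learned transition estimate
dist = fuse_cluster_distribution(symbolic, neural, symbolic_var=0.04, neural_var=0.01)
rng = random.Random(0)
next_states = realize_transitions([f"agent_{i}" for i in range(100)], dist, rng)
```

Because the neural estimate has lower variance here, the fused distribution leans toward it, while every individual agent still gets its own stochastic realization, which is the population-level/entity-level split the method relies on for scalability.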