🤖 AI Summary
This work addresses a limitation of existing large language model–driven social simulators: they focus predominantly on macroscopic population dynamics while neglecting the evolution of individuals' internal states, and therefore fail to capture opinion reversals driven by gradual change over long time horizons. To overcome this, the authors propose MF-MDP, a framework that, for the first time in social simulation, explicitly models latent individual opinion states and their Markovian transition dynamics, coupling microscopic decision-making with macroscopic population behavior through mean-field theory. The approach substantially improves long-horizon simulation stability: it sustains 40,000 interaction rounds on real-world event data, over two orders of magnitude longer than the baseline MF-LLM, and reduces KL divergence by 75.3% overall and by 66.9% during opinion reversals, effectively mitigating simulation drift.
📝 Abstract
Social network simulation aims to model collective opinion dynamics in large populations, but existing LLM-based simulators mainly focus on aggregate dynamics while largely ignoring individual internal states. This limits their ability to capture opinion reversals driven by gradual individual shifts and makes them unreliable in long-horizon simulations. We propose MF-MDP, a social simulation framework that tightly couples macro-level collective dynamics with micro-level individual states. MF-MDP explicitly models per-agent latent opinion states with a state transition mechanism, combining individual Markov Decision Processes at the micro level with a mean-field collective framework at the macro level. This lets individual behaviors update internal states gradually rather than trigger instant reactions, enabling the simulator to distinguish agents that are close to switching from those that are far from switching, capture opinion reversals, and maintain accuracy over long horizons. Across real-world events, MF-MDP supports stable simulation of long-horizon social processes with up to 40,000 interactions, compared with about 300 in the baseline MF-LLM, while reducing long-horizon KL divergence by 75.3% (1.2490 to 0.3089) and reversal KL by 66.9% (1.6425 to 0.5434), significantly mitigating the drift observed in MF-LLM. Code is available at github.com/AI4SS/MF-MDP.
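The micro–macro coupling idea can be illustrated with a minimal toy sketch. This is not the authors' implementation (which uses LLM agents and the MF-MDP formulation from the paper); the class names, update rule, and parameters below are all illustrative assumptions. Each agent carries a continuous latent opinion state that drifts under the influence of a population-level mean field, and its expressed stance flips only when the latent state crosses a threshold, so small repeated nudges can accumulate into an opinion reversal rather than triggering an instant reaction:

```python
import random

class Agent:
    """Toy agent with a latent opinion state in [0, 1].

    The expressed stance flips only when the latent state crosses 0.5,
    so repeated small updates can accumulate into an opinion reversal.
    """
    def __init__(self, state):
        self.state = state  # latent opinion: 0 = against, 1 = in favor

    def stance(self):
        # Discrete expressed opinion derived from the latent state.
        return 1 if self.state >= 0.5 else 0

    def step(self, mean_field, lr=0.05, noise=0.02):
        # Markov-style transition: the next state depends only on the
        # current state and the population-level mean field, plus noise.
        drift = lr * (mean_field - self.state)
        self.state = min(1.0, max(0.0, self.state + drift + random.gauss(0, noise)))

def simulate(n_agents=100, n_rounds=200, seed=0):
    random.seed(seed)
    agents = [Agent(random.random()) for _ in range(n_agents)]
    history = []
    for _ in range(n_rounds):
        # Macro level: the mean field summarizes the whole population.
        mf = sum(a.state for a in agents) / n_agents
        # Micro level: each agent updates its latent state against it.
        for a in agents:
            a.step(mf)
        history.append(sum(a.stance() for a in agents) / n_agents)
    return history

history = simulate()
```

`history` traces the fraction of agents expressing stance 1 over time; agents whose latent state sits near 0.5 are "close to switching" in the sense described above, while those near 0 or 1 are far from it.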