MF-LLM: Simulating Collective Decision Dynamics via a Mean-Field Large Language Model Framework

📅 2025-04-30
🤖 AI Summary
Existing LLM-based social simulations fail to capture the feedback loop between individual decision-making and collective dynamics, leading to significant deviations from real-world behavioral data. To address this, we propose Mean-Field LLM, a novel framework that pioneers the integration of mean-field theory into LLM-driven social simulation. It explicitly models bidirectional micro-macro feedback via alternating rollouts between a strategic agent model and a mean-field population model. We further introduce IB-Tune, an information-bottleneck-driven fine-tuning method, to enhance predictive accuracy of population-level distributions while promoting representation parsimony. The framework adopts a domain-agnostic architecture for cross-domain generalization. Evaluated on real-world social datasets, it reduces KL divergence by 47% over baselines, substantially improving trend forecasting and intervention planning capabilities. It generalizes robustly across seven distinct domains and four major LLM backbones.

๐Ÿ“ Abstract
Simulating collective decision-making involves more than aggregating individual behaviors; it arises from dynamic interactions among individuals. While large language models (LLMs) show promise for social simulation, existing approaches often exhibit deviations from real-world data. To address this gap, we propose the Mean-Field LLM (MF-LLM) framework, which explicitly models the feedback loop between micro-level decisions and macro-level population dynamics. MF-LLM alternates between two models: a policy model that generates individual actions based on personal states and group-level information, and a mean-field model that updates the population distribution from the latest individual decisions. Together, they produce rollouts that simulate the evolving trajectories of collective decision-making. To better match real-world data, we introduce IB-Tune, a fine-tuning method for LLMs grounded in the information bottleneck principle, which maximizes the relevance of population distributions to future actions while minimizing redundancy with historical data. We evaluate MF-LLM on a real-world social dataset, where it reduces KL divergence to human population distributions by 47 percent over non-mean-field baselines, and enables accurate trend forecasting and intervention planning. It generalizes across seven domains and four LLM backbones, providing a scalable foundation for high-fidelity social simulation.
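The alternating rollout described in the abstract can be sketched in a few lines. This is a hedged toy illustration, not the paper's implementation: both models are LLMs in MF-LLM, but here `policy_model` and `mean_field_model` are stand-in callables (hypothetical names), and the threshold rule and initial population belief are assumptions for the sake of a runnable example.

```python
from collections import Counter

def policy_model(agent_state, population_summary):
    # Micro step (stand-in for the LLM policy): each agent acts based on
    # its own state and the current macro-level population summary.
    return "adopt" if population_summary.get("adopt", 0.0) >= agent_state else "wait"

def mean_field_model(actions):
    # Macro step (stand-in for the mean-field LLM): compress the latest
    # individual actions into a population-level distribution.
    counts = Counter(actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def rollout(agent_states, steps):
    summary = {"adopt": 0.5}  # assumed initial population belief
    trajectory = [summary]
    for _ in range(steps):
        actions = [policy_model(s, summary) for s in agent_states]  # micro
        summary = mean_field_model(actions)                         # macro
        trajectory.append(summary)
    return trajectory

traj = rollout([0.2, 0.4, 0.6, 0.8], steps=3)
```

The key structural point the sketch preserves is the bidirectional feedback: individual actions are conditioned on the population summary, and the summary is re-estimated from those actions at every step.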
Problem

Research questions and friction points this paper is trying to address.

Simulating collective decision dynamics via mean-field LLM
Reducing deviations from real-world data in social simulations
Modeling micro-macro feedback loops in decision-making processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

MF-LLM models micro-macro feedback loops
IB-Tune fine-tunes LLMs via information bottleneck
Combines policy and mean-field models for simulation
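The IB-Tune bullet above corresponds to the standard information-bottleneck trade-off. As a hedged sketch (notation assumed here, not taken from the paper): with $H$ the historical data, $A$ the future actions, $Z$ the population summary produced by the mean-field model, and $\beta > 0$ a trade-off weight, the objective is

```latex
\max_{Z} \; I(Z; A) \;-\; \beta \, I(Z; H)
```

i.e., the summary $Z$ should be maximally predictive of future actions while discarding information that is redundant with the history.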
Qirui Mi
Ph.D. student, Institute of Automation, Chinese Academy of Sciences
multi-agent system · reinforcement learning · LLM · computational economics
Mengyue Yang
Lecturer, University of Bristol
Causality · Trustworthiness
Xiangning Yu
Tianjin University
Zhiyu Zhao
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, Chinese Academy of Sciences
Cheng Deng
University of Edinburgh
On-device LLM · NLP · GeoAI
Bo An
Nanyang Technological University
Haifeng Zhang
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, Chinese Academy of Sciences
Xu Chen
Renmin University of China
Jun Wang
University College London