Rethinking LLM Human Simulation: When a Graph is What You Need

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the low efficiency and limited interpretability of large language models (LLMs) in modeling human discrete choice behavior. To this end, the authors propose replacing LLMs with lightweight graph neural networks (GNNs). The core method uniformly formulates discrete choice tasks as link prediction problems on structured graphs, enabling joint modeling of linguistic representations and relational dependencies. The resulting framework, GEMS (Graph-basEd Models for human Simulation), establishes a unified graph-relational learning paradigm for discrete choice simulation. Evaluated on three simulation datasets, GEMS achieves accuracy comparable to or exceeding that of strong LLM baselines while using fewer than 0.1% of their parameters, and it delivers substantial improvements in inference speed and decision transparency, thereby simultaneously achieving high performance, model compactness, and interpretability.

📝 Abstract
Large language models (LLMs) are increasingly used to simulate humans, with applications ranging from survey prediction to decision-making. However, are LLMs strictly necessary, or can smaller, domain-grounded models suffice? We identify a large class of simulation problems in which individuals make choices among discrete options, where a graph neural network (GNN) can match or surpass strong LLM baselines despite being three orders of magnitude smaller. We introduce Graph-basEd Models for human Simulation (GEMS), which casts discrete choice simulation tasks as a link prediction problem on graphs, leveraging relational knowledge while incorporating language representations only when needed. Evaluations across three key settings on three simulation datasets show that GEMS achieves comparable or better accuracy than LLMs, with far greater efficiency, interpretability, and transparency, highlighting the promise of graph-based modeling as a lightweight alternative to LLMs for human simulation. Our code is available at https://github.com/schang-lab/gems.
Problem

Research questions and friction points this paper is trying to address.

Evaluating whether smaller graph models can replace LLMs for human simulation tasks
Simulating human discrete choices using graph neural networks as link prediction
Developing efficient graph-based alternatives to resource-intensive LLM simulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph neural networks simulate human choices efficiently
GEMS casts simulation as link prediction on graphs
Graph-based modeling offers lightweight alternative to LLMs
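The link-prediction formulation described above can be illustrated with a minimal sketch: people and options as nodes of a bipartite graph, one mean-aggregation message-passing step standing in for a GNN layer, and a predicted choice as the highest-scoring person-option link. All specifics here (embedding sizes, the aggregation rule, dot-product scoring) are illustrative assumptions, not the actual GEMS architecture.

```python
import numpy as np

# Hypothetical sketch of discrete choice as link prediction on a
# bipartite person-option graph. Not the paper's implementation.
rng = np.random.default_rng(0)

n_persons, n_options, dim = 4, 3, 8
person_x = rng.normal(size=(n_persons, dim))  # person node features
option_x = rng.normal(size=(n_options, dim))  # option node features

# Observed person-option edges (e.g., past choices) as an adjacency matrix.
adj = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
], dtype=float)

def gnn_layer(person_x, option_x, adj):
    """One mean-aggregation message-passing step on the bipartite graph."""
    deg_p = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    deg_o = adj.sum(axis=0, keepdims=True).clip(min=1.0).T
    person_h = person_x + (adj @ option_x) / deg_p    # persons gather options
    option_h = option_x + (adj.T @ person_x) / deg_o  # options gather persons
    return person_h, option_h

person_h, option_h = gnn_layer(person_x, option_x, adj)

# Link prediction: score every person-option pair, pick the best option.
scores = person_h @ option_h.T  # shape (n_persons, n_options)
choices = scores.argmax(axis=1)  # one predicted option index per person
```

In a trained model, the embeddings and aggregation weights would be learned so that `scores` ranks the person's actual choice highest; the edge weights themselves also make the prediction inspectable, which is one way a graph model can be more transparent than an LLM.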
🔎 Similar Papers
2024-10-06 · Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations) · Citations: 13
Joseph Suh (University of California, Berkeley)
Suhong Moon (UC Berkeley)
Serina Chang (University of California, Berkeley)