🤖 AI Summary
This work proposes a novel approach to enhance the realism of large language models (LLMs) in simulating human behavior in social dilemma games by explicitly modeling identity-driven actions and context-dependent decision-making. Moving beyond conventional weak role prompts, the method deeply integrates narrative-rich identity profiles with instruction tuning and a consistency verification mechanism to construct a robust social dilemma simulation framework. Experimental results demonstrate that the proposed framework successfully replicates key empirical findings from human studies regarding the influence of identity and contextual factors—such as time pressure, problem framing, and group composition—on strategic choices. This advancement significantly improves the granularity, fidelity, and reproducibility of simulated social behaviors in computational models.
📝 Abstract
Humans act via a nuanced process that depends on both rational deliberation and identity and contextual factors. In this work, we study how large language models (LLMs) can simulate human action in the context of social dilemma games. While prior work has focused on "steering" (weak binding) of chat models to simulate personas, we analyze here how deep binding of base models with extended backstories leads to more faithful replication of identity-based behaviors. Our study yields these findings: simulation fidelity relative to human studies improves when base LMs are conditioned on rich narrative identities and their consistency is checked using instruction-tuned models. We show that LLMs can also model contextual factors such as time (the year a study was performed), question framing, and participant pool effects. LLMs therefore let us explore details that affect human studies but are often omitted from experiment descriptions, hampering accurate replication.
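The "deep binding" idea described above can be sketched as prompt construction: prepend a narrative-rich backstory so a base model continues in character, then have an instruction-tuned model judge whether a sampled action fits the identity. This is a minimal illustrative sketch, not the authors' actual pipeline; the function names, backstory, and prompt wording are all assumptions.

```python
# Illustrative sketch of deep binding + consistency verification.
# All names and prompt templates here are hypothetical, not from the paper.

def build_identity_prompt(backstory: str, game_description: str) -> str:
    """Prepend an extended backstory so a base model continues *as* the persona."""
    return (
        f"{backstory}\n\n"
        f"{game_description}\n"
        "When asked to choose, I decide to"
    )

def build_consistency_prompt(backstory: str, action: str) -> str:
    """Prompt for an instruction-tuned model to judge identity/action consistency."""
    return (
        "Given the following persona:\n"
        f"{backstory}\n\n"
        f"Is the action '{action}' consistent with this persona? Answer yes or no."
    )

# Hypothetical narrative identity and social dilemma setup.
backstory = (
    "I am Maria, a 54-year-old nurse who volunteers at a food bank on weekends "
    "and believes strongly in pulling together during hard times."
)
game = (
    "I am playing a one-shot public goods game: I may contribute 0-10 tokens "
    "to a shared pool that is doubled and split evenly among four players."
)

prompt = build_identity_prompt(backstory, game)
check = build_consistency_prompt(backstory, "contribute 8 tokens")
# A base model would complete `prompt` from its open-ended ending; the
# instruction-tuned model would answer `check` to filter inconsistent actions.
```

Contextual factors such as framing or study year could be folded into `game_description` the same way (e.g. rewording the dilemma or dating the scenario), which is what makes these normally-unreported details explorable in simulation.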