Large Language Models Assume People are More Rational than We Really are

📅 2024-06-24
🏛️ arXiv.org
📈 Citations: 9 · Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit a systematic "rationality bias" when simulating human decision-making: their implicit models of people rely on rational choice theory and expected value maximization, and so diverge significantly from empirically observed human behavior. Method: We conduct the first large-scale empirical evaluation against behavioral datasets (e.g., Risky Choice, Intention Inference), benchmarking a suite of cutting-edge models (GPT-4o/4-Turbo, Llama-3, Claude 3 Opus) on both simulating and predicting human choices. Contribution/Results: We demonstrate that the bias is pervasive across models and that correcting for the rationality assumption substantially improves prediction accuracy. Notably, when inferring preferences from other people's decisions, LLM inferences correlate strongly with human inferences (r > 0.85), because people themselves expect others to act rationally. Our work establishes a benchmark for evaluating the behavioral fidelity and cognitive alignment of LLMs in human-centered modeling.
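As a concrete illustration of the evaluation logic, here is a minimal sketch (not the paper's code; the gambles and choice rates are hypothetical placeholders) of how one might correlate a model's choice rates with an expected-value baseline and with human data on two-option risky gambles:

```python
# Minimal sketch (not the paper's code): correlate a model's choice rates
# with an expected-value baseline and with human data. All numbers are
# illustrative placeholders, not results from the paper.
import numpy as np

# Each gamble: (win probability, payoff if win) vs. a sure amount.
gambles = [(0.5, 100, 45), (0.8, 25, 22), (0.1, 500, 40)]

def ev_choice(p, payoff, sure):
    """Expected value theory: take the gamble iff p * payoff > sure amount."""
    return 1.0 if p * payoff > sure else 0.0

ev_baseline = np.array([ev_choice(*g) for g in gambles])

# Hypothetical fractions of trials on which the gamble is chosen.
human_rates = np.array([0.35, 0.40, 0.15])  # humans are often risk-averse for gains
llm_rates   = np.array([0.85, 0.30, 0.80])  # an EV-aligned model tracks the baseline

print(f"LLM vs. EV baseline: r = {np.corrcoef(llm_rates, ev_baseline)[0, 1]:.2f}")
print(f"LLM vs. human data:  r = {np.corrcoef(llm_rates, human_rates)[0, 1]:.2f}")
```

On data like this, the model tracks the rational baseline far more closely than it tracks human choices, which is the shape of the paper's headline result.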

📝 Abstract
In order for AI systems to communicate effectively with people, they must understand how we make decisions. However, people's decisions are not always rational, so the implicit internal models of human decision-making in Large Language Models (LLMs) must account for this. Previous empirical evidence seems to suggest that these implicit models are accurate -- LLMs offer believable proxies of human behavior, acting how we expect humans would in everyday interactions. However, by comparing LLM behavior and predictions to a large dataset of human decisions, we find that this is actually not the case: when both simulating and predicting people's choices, a suite of cutting-edge LLMs (GPT-4o&4-Turbo, Llama-3-8B&70B, Claude 3 Opus) assume that people are more rational than we really are. Specifically, these models deviate from human behavior and align more closely with a classic model of rational choice -- expected value theory. Interestingly, people also tend to assume that other people are rational when interpreting their behavior. As a consequence, when we compare the inferences that LLMs and people draw from the decisions of others using another psychological dataset, we find that these inferences are highly correlated. Thus, the implicit decision-making models of LLMs appear to be aligned with the human expectation that other people will act rationally, rather than with how people actually act.
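The abstract's central contrast, expected value theory versus actual human risk preferences, shows up in a single gamble. A minimal sketch, assuming a square-root utility as a generic stand-in for human risk aversion (not a model from the paper):

```python
# Minimal numeric sketch of the abstract's contrast: expected value (EV)
# theory accepts any gamble whose EV beats the sure amount, while a
# risk-averse agent (concave utility; sqrt is a generic stand-in, not the
# paper's model) may decline the same gamble.
import math

p, payoff, sure = 0.5, 100.0, 45.0

ev_gamble = p * payoff             # 50.0 > 45.0 -> EV theory takes the gamble
eu_gamble = p * math.sqrt(payoff)  # 0.5 * 10.0 = 5.00
eu_sure   = math.sqrt(sure)        # ~6.71 -> risk-averse agent takes the sure amount

print(f"EV: gamble {ev_gamble:.1f} vs. sure {sure:.1f} -> choose gamble")
print(f"EU: gamble {eu_gamble:.2f} vs. sure {eu_sure:.2f} -> choose sure amount")
```

The paper's finding is that LLM choices sit closer to the first line than human choices do.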
Problem

Research questions and friction points this paper is trying to address.

LLMs overestimate human rationality in decision-making.
LLMs align more closely with rational choice theory than with actual human behavior.
LLMs and humans both assume others act rationally.
Innovation

Methods, ideas, or system contributions that make the work stand out.

First large-scale comparison of LLM simulations and predictions against a large dataset of human risky choices.
Evidence that LLM choices align with expected value theory rather than observed human behavior.
Demonstration that LLM inferences about others mirror the human tendency to assume rationality.