Prospect Theory Fails for LLMs: Revealing Instability of Decision-Making under Epistemic Uncertainty

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether prospect theory—a foundational model of human decision-making under uncertainty—adequately characterizes the behavioral responses of large language models (LLMs) to epistemic uncertainty, particularly when confronted with natural-language uncertainty markers such as "possible." Method: We propose a three-stage economic questionnaire framework integrating probabilistic mapping and epistemic marker injection to systematically assess LLMs' responses to diverse linguistic uncertainty expressions under controlled conditions. Contribution/Results: Our empirical evaluation reveals that prospect theory exhibits significant instability in LLMs: neither the value function nor the probability weighting function is consistently replicable, especially across heterogeneous linguistic uncertainty formulations. This work introduces the first integrated assessment framework linking epistemic uncertainty and decision behavior in LLMs; uncovers how surface-level linguistic form critically undermines standard rationality assumptions; and publicly releases all code and data to establish a reproducible benchmark for AI rationality modeling.

📝 Abstract
Prospect Theory (PT) models human decision-making under uncertainty, while epistemic markers (e.g., maybe) serve to express uncertainty in language. However, it remains largely unexplored whether Prospect Theory applies to contemporary Large Language Models and whether epistemic markers, which express human uncertainty, affect their decision-making behaviour. To address these research gaps, we design a three-stage experiment based on economic questionnaires. We propose a more general and precise evaluation framework to model LLMs' decision-making behaviour under PT, introducing uncertainty through the empirical probability values associated with commonly used epistemic markers in comparable contexts. We then incorporate epistemic markers into the evaluation framework based on their corresponding probability values to examine their influence on LLM decision-making behaviours. Our findings suggest that modelling LLMs' decision-making with PT is not consistently reliable, particularly when uncertainty is expressed in diverse linguistic forms. Our code is released at https://github.com/HKUST-KnowComp/MarPT.
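For readers unfamiliar with PT, the value and probability-weighting functions the abstract refers to can be sketched as follows. This is a minimal illustration using the standard Tversky–Kahneman (1992) functional forms with their original human-subject parameter estimates, not the paper's fitted values for LLMs:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """PT value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def pt_utility(outcomes):
    """Prospect value of a gamble given (probability, payoff) pairs."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# Example: a sure $450 vs. a 50% chance of $1000.
# With these human-calibrated parameters, PT favours the sure option.
sure = pt_utility([(1.0, 450)])
gamble = pt_utility([(0.5, 1000)])
```

The paper's finding is that, for LLMs, neither `value` nor `weight` can be fitted consistently once uncertainty is phrased linguistically rather than numerically.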
Problem

Research questions and friction points this paper is trying to address.

Tests Prospect Theory applicability to LLM decision-making
Examines epistemic markers' influence on LLM uncertainty behavior
Reveals instability in LLM decision-making under linguistic uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Empirical probability values for epistemic markers
Three-stage experiment with economic questionnaires
General evaluation framework for decision-making
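The marker-injection idea above can be illustrated with a small sketch: pair each questionnaire item's numeric probability with an epistemic marker mapped to the same probability, so any divergence in the model's choices isolates the effect of linguistic form. The marker-to-probability values below are illustrative placeholders, not the paper's empirical estimates:

```python
# Hypothetical marker-to-probability map; the paper derives these values
# empirically from comparable contexts, the numbers here are made up.
MARKER_PROB = {"certainly": 0.95, "likely": 0.70, "possible": 0.40, "unlikely": 0.20}

def numeric_prompt(payoff, p):
    """Questionnaire item with an explicit numeric probability."""
    return f"Option A: a {p:.0%} chance of winning ${payoff}. Option B: a sure $50."

def marker_prompt(payoff, marker):
    """The same item with the probability replaced by an epistemic marker."""
    return f"Option A: it is {marker} that you win ${payoff}. Option B: a sure $50."

# Paired conditions: same underlying probability, numeric vs. linguistic form.
pairs = [(numeric_prompt(100, p), marker_prompt(100, m))
         for m, p in MARKER_PROB.items()]
```

Comparing an LLM's choices across each pair tests whether surface wording alone shifts its decision behaviour.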