Mind the (DH) Gap! A Contrast in Risky Choices Between Reasoning and Conversational LLMs

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited understanding of risky decision-making in large language models (LLMs) under uncertainty. Using a behavioral experimental paradigm, it compares the choices of 20 state-of-the-art LLMs under both explicit and experience-based prospect representations, analyzing the results alongside human participant data and a rational agent model. The work identifies and characterizes two distinct model types: reasoning models, which approximate rational decision-making and are insensitive to prospect order, gain–loss framing, and explanation requests; and conversational models, which exhibit human-like irrational biases and a pronounced description–history gap (DH gap). The study thus provides a classification framework and an empirical foundation for understanding the decision mechanisms of large language models.

📝 Abstract
The use of large language models, either as decision support systems or in agentic workflows, is rapidly transforming the digital ecosystem. However, the understanding of LLM decision-making under uncertainty remains limited. We initiate a comparative study of LLM risky choices along two dimensions: (1) prospect representation (explicit vs. experience-based) and (2) decision rationale (explanation). Our study, which involves 20 frontier and open LLMs, is complemented by a matched human subjects experiment, which provides one reference point, while an expected-payoff-maximizing rational agent model provides another. We find that LLMs cluster into two categories: reasoning models (RMs) and conversational models (CMs). RMs tend towards rational behavior, are insensitive to the order of prospects, gain/loss framing, and explanations, and behave similarly whether prospects are explicit or presented via experience history. CMs are significantly less rational, slightly more human-like, sensitive to prospect ordering, framing, and explanation, and exhibit a large description-history gap. Paired comparisons of open LLMs suggest that a key factor differentiating RMs and CMs is training for mathematical reasoning.
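The two prospect representations and the rational baseline described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual experimental code: the prospect values, sample size, and function names are assumptions chosen to mirror a classic description–experience pair (a sure payoff vs. a risky one with higher expected value).

```python
import random

def expected_value(prospect):
    """Expected payoff of a prospect given as [(outcome, probability), ...]."""
    return sum(outcome * p for outcome, p in prospect)

def rational_choice(prospect_a, prospect_b):
    """Expected-payoff-maximizing baseline: pick the prospect with higher EV."""
    return "A" if expected_value(prospect_a) >= expected_value(prospect_b) else "B"

def sample_history(prospect, n, seed=0):
    """Experience-based representation: n draws from the prospect, shown to the
    decision-maker as a history instead of explicit probabilities."""
    outcomes, probs = zip(*prospect)
    return random.Random(seed).choices(outcomes, weights=probs, k=n)

# Hypothetical prospect pair: a sure $3 vs. an 80% chance of $4 (EV 3.2).
safe  = [(3.0, 1.0)]
risky = [(4.0, 0.8), (0.0, 0.2)]
print(rational_choice(safe, risky))   # the rational agent takes the risky option
print(sample_history(risky, 10))      # what an experience-based subject sees
```

A description-history gap appears when a model's choice rate for the risky option differs between the explicit form (probabilities stated) and the sampled form (only the draw history shown), even though both encode the same prospect.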
Problem

Research questions and friction points this paper is trying to address.

large language models
risky decision-making
uncertainty
description-history gap
risk preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning models
conversational models
description-history gap
risky choice
mathematical reasoning