Humans and LLMs Diverge on Probabilistic Inferences

📅 2026-02-26
📈 Citations: 0
Influential Citations: 0
🤖 AI Summary
This study investigates discrepancies between large language models (LLMs) and human judgment on non-deterministic, probabilistic reasoning tasks. The authors introduce ProbCOPA, a dataset of 210 handcrafted samples, and systematically compare human annotations with the outputs of eight state-of-the-art reasoning LLMs to analyze behavior on open-ended probabilistic inference. They find that human probability judgments form graded, continuous distributions, whereas LLM outputs consistently deviate from them, lacking the diversity and fine-grained nuance of human responses. This divergence points to limitations of current LLMs in handling uncertainty. The study also identifies a common simplification strategy shared across models' reasoning chains, offering insight into how these models evaluate probabilistic inferences.

📝 Abstract
Human reasoning often involves working over limited information to arrive at probabilistic conclusions. In its simplest form, this involves making an inference that is not strictly entailed by a premise, but rather only likely given the premise. While reasoning LLMs have demonstrated strong performance on logical and mathematical tasks, their behavior on such open-ended, non-deterministic inferences remains largely unexplored. We introduce ProbCOPA, a dataset of 210 handcrafted probabilistic inferences in English, each annotated for inference likelihood by 25–30 human participants. We find that human responses are graded and varied, revealing probabilistic judgments of the inferences in our dataset. Comparing these judgments with responses from eight state-of-the-art reasoning LLMs, we show that models consistently fail to produce human-like distributions. Finally, analyzing LLM reasoning chains, we find evidence of a common reasoning pattern used to evaluate such inferences. Our findings reveal persistent differences between humans and LLMs, and underscore the need to evaluate reasoning beyond deterministic settings.
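
To make the kind of comparison the abstract describes concrete, below is a minimal sketch of one way to contrast the human rating distribution for a single inference item with an LLM's. Everything in it is an assumption for illustration: the 1–5 likelihood scale, the specific rating values, and the use of a Jensen–Shannon distance are not taken from the paper, which does not specify its comparison metric here.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Illustrative data only: per-item likelihood ratings on a hypothetical 1-5 scale.
# ProbCOPA items are each rated by 25-30 humans; the model ratings stand in for
# repeated samples from a single LLM. None of these numbers come from the paper.
human_ratings = np.array([2, 3, 3, 4, 3, 2, 4, 3, 3, 5, 2, 3, 4,
                          3, 3, 2, 4, 3, 3, 3, 4, 2, 3, 3, 4])
model_ratings = np.array([4, 4, 4, 4, 4, 5, 4, 4, 4, 4])

def rating_distribution(ratings: np.ndarray, scale: int = 5) -> np.ndarray:
    """Turn a vector of integer ratings into a probability distribution over the scale."""
    counts = np.bincount(ratings, minlength=scale + 1)[1:]  # drop the unused 0 bin
    return counts / counts.sum()

p_human = rating_distribution(human_ratings)
p_model = rating_distribution(model_ratings)

# Jensen-Shannon distance: 0 means identical distributions, 1 means maximally different.
print(f"JS distance, human vs. model: {jensenshannon(p_human, p_model, base=2):.3f}")
```

Aggregating such per-item distances across the 210 items would give one rough picture of how far model judgments sit from the graded, varied human responses the authors report.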
Problem

Research questions and friction points this paper is trying to address.

probabilistic inference
human reasoning
large language models
non-deterministic reasoning
reasoning evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

probabilistic reasoning
large language models
human-AI divergence
ProbCOPA dataset
non-deterministic inference