The Bayesian Origin of the Probability Weighting Function in Human Representation of Probabilities

📅 2025-10-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the classic cognitive bias wherein humans systematically distort probability perception—overweighting small probabilities and underweighting medium-to-high ones. Method: Grounded in a Bayesian optimal decoding framework, we propose that the probability weighting function arises not from heuristic or erroneous processing, but from rational inference over noisy neural population codes. We integrate stochastic neural encoding with Bayesian inference to derive a computationally tractable model that predicts how probability weights adapt to prior distributions (e.g., bimodal priors). Contribution/Results: Our theory provides the first unified, normative account of the probability weighting function rooted in rational inference under biological constraints. It successfully fits individual-level choice data from two behavioral paradigms—lottery decisions and dot-counting estimation—with significantly higher accuracy than descriptive benchmarks (e.g., Prospect Theory). These results reveal a neurocognitively grounded mechanism for probability representation that balances statistical optimality with neural noise limitations.
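The decoding mechanism summarized above can be made concrete with a small sketch. This is an illustration only, assuming (as one common modeling choice, not necessarily the paper's exact formulation) that probabilities are encoded as log-odds corrupted by Gaussian noise and decoded as the posterior mean under a Gaussian prior; the noise level `sigma`, prior width `sigma0`, and prior mean `mu0` are hypothetical parameters:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def weight(p, sigma=1.0, sigma0=1.5, mu0=0.0):
    """Median decoded probability for true probability p.

    Encoding: internal measurement m ~ N(logit(p), sigma^2).
    Decoding: posterior mean of the log-odds under a prior
    N(mu0, sigma0^2), which shrinks m toward mu0 by a fixed
    factor k. Because sigmoid is monotone, applying it to the
    median decoded log-odds (= logit(p) shrunk by k) gives the
    median decoded probability.
    """
    k = sigma0**2 / (sigma0**2 + sigma**2)  # shrinkage factor, 0 < k < 1
    return sigmoid(k * logit(p) + (1 - k) * mu0)
```

With `mu0 = 0`, this reproduces the inverse-S shape described in the summary: small probabilities are overweighted, large ones underweighted, with a fixed point at p = 0.5.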

📝 Abstract
How the human mind represents probability has long been central to understanding human decision making. Classical paradoxes in decision making suggest that human perception distorts probability magnitudes. Previous accounts postulate a Probability Weighting Function that transforms perceived probabilities; however, its motivation has been debated. Recent work has sought to motivate this function in terms of noisy representations of probabilities in the human mind. Here, we present an account of the Probability Weighting Function grounded in rational inference over optimal decoding from noisy neural encoding of quantities. We show that our model accurately accounts for behavior in a lottery task and a dot counting task. It further accounts for adaptation to a bimodal short-term prior. Taken together, our results provide a unifying account grounding the human representation of probability in rational inference.
Problem

Research questions and friction points this paper is trying to address.

Explains Bayesian origin of probability weighting function
Models human probability representation via neural encoding
Accounts for behavioral adaptation to prior distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian model explains probability weighting function
Optimal decoding from noisy neural encoding
Accounts for behavior in lottery and dot tasks
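The adaptation effect noted above (and in the abstract's bimodal short-term prior experiment) can be illustrated with the same decoding idea: swapping in a bimodal prior pulls decoded probabilities toward the nearest mode. A hedged numerical sketch, with all parameter values (mode locations, widths, noise level) chosen for illustration only:

```python
import math

def logit(p): return math.log(p / (1 - p))
def sigmoid(x): return 1 / (1 + math.exp(-x))

def normal_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def decode_bimodal(p, sigma=1.0, modes=(0.2, 0.8), mode_sd=0.5, n=2001):
    """Posterior-mean decoding of log-odds under a bimodal prior.

    The prior is an equal mixture of two Gaussians in log-odds space,
    centered on the modes of a hypothetical short-term stimulus
    distribution. For simplicity we decode the noiseless mean
    measurement m = logit(p) rather than averaging over noise draws.
    """
    m = logit(p)
    lo, hi = -8.0, 8.0
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    num = den = 0.0
    for x in xs:
        prior = 0.5 * normal_pdf(x, logit(modes[0]), mode_sd) \
              + 0.5 * normal_pdf(x, logit(modes[1]), mode_sd)
        post = prior * normal_pdf(m, x, sigma)  # prior * likelihood on grid
        num += x * post
        den += post
    return sigmoid(num / den)
```

With the illustrative modes at 0.2 and 0.8, decoded estimates near 0.3 are attracted toward the 0.2 mode and estimates near 0.7 toward the 0.8 mode, qualitatively matching adaptation of the weighting curve to a bimodal short-term prior.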
Xin Tong, Saarland University
Thi Thu Uyen Hoang, Saarland University
Xue-Xin Wei, Department of Neuroscience, Department of Psychology, UT Austin (Theoretical neuroscience, Computational neuroscience, Vision, Cognition, Artificial intelligence)
Michael Hahn, Saarland University