🤖 AI Summary
This study addresses the classic cognitive bias wherein humans systematically distort probability perception—overweighting small probabilities and underweighting medium-to-high ones.
Method: Grounded in a Bayesian optimal decoding framework, we propose that the probability weighting function arises not from heuristic or erroneous processing, but from rational inference over noisy neural population codes. We integrate stochastic neural encoding with Bayesian inference to derive a computationally tractable model that predicts how probability weights adapt to prior distributions (e.g., bimodal priors).
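The mechanism can be illustrated with a minimal sketch (not the paper's actual model): assume the probability is encoded as a noisy internal signal in log-odds space, and the decoder reports the posterior mean under a Gaussian prior. Shrinkage toward the prior then overweights small probabilities and underweights large ones, producing the inverse-S shape. All parameter values below (noise level, prior mean and width) are illustrative assumptions.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sketch, NOT the paper's model: probability p is encoded as a
# noisy measurement m ~ N(logit(p), sigma^2) in log-odds space, and decoded
# as the posterior mean under a Gaussian prior N(prior_mu, prior_sigma^2).
def decode_weight(p, sigma=1.0, prior_mu=0.0, prior_sigma=2.0, n_samples=200_000):
    rng = np.random.default_rng(0)
    m = logit(p) + sigma * rng.standard_normal(n_samples)
    # Conjugate Gaussian posterior: the mean shrinks m toward the prior mean.
    shrink = prior_sigma**2 / (prior_sigma**2 + sigma**2)
    post_mean = shrink * m + (1 - shrink) * prior_mu
    # Average decoded probability plays the role of the subjective weight w(p).
    return inv_logit(post_mean).mean()

# Shrinkage toward the prior (centered on p = 0.5 in log-odds space)
# overweights small p and underweights large p: an inverse-S curve.
ps = np.linspace(0.05, 0.95, 19)
weights = np.array([decode_weight(p) for p in ps])
```

The inverse-S shape here is a generic consequence of posterior-mean decoding under noise, which is the intuition behind treating the weighting function as rational inference rather than a heuristic.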
Contribution/Results: Our theory provides the first unified, normative account of the probability weighting function rooted in rational inference under biological constraints. It successfully fits individual-level choice data from two behavioral paradigms—lottery decisions and dot-counting estimation—with significantly higher accuracy than descriptive benchmarks (e.g., Prospect Theory). These results reveal a neurocognitively grounded mechanism for probability representation that balances statistical optimality with neural noise limitations.
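For reference, the descriptive benchmark mentioned above is typically instantiated by the one-parameter weighting function of Tversky & Kahneman (1992), which fits the inverse-S shape but does not explain it:

```python
import numpy as np

# Tversky & Kahneman (1992) probability weighting function, the standard
# descriptive benchmark; gamma < 1 gives the inverse-S shape (their median
# estimate for gains was gamma ~ 0.61).
def tk_weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
```

The contrast is that this form is fit directly to choice data, whereas the present account derives the distortion from noisy encoding plus rational decoding.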
📝 Abstract
Understanding how probability is represented in the human mind is central to understanding human decision making. Classical paradoxes in decision making suggest that human perception distorts probability magnitudes. Previous accounts postulate a Probability Weighting Function that transforms perceived probabilities; however, its theoretical motivation has been debated. Recent work has sought to motivate this function in terms of noisy representations of probability in the human mind. Here, we present an account of the Probability Weighting Function grounded in rational inference, implemented as optimal decoding from noisy neural encodings of quantity. We show that our model accurately accounts for behavior in a lottery task and a dot-counting task, and that it further captures adaptation to a bimodal short-term prior. Taken together, our results provide a unifying account that grounds the human representation of probability in rational inference.
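The bimodal-prior prediction can be sketched in the same decoding framework (again, an illustrative toy, not the paper's fitted model): with a two-mode prior over the latent log-odds, posterior-mean decoding pulls each noisy measurement toward the nearer mode, so decoded probabilities cluster around the modes instead of around 0.5. The mode locations and widths below are arbitrary assumptions.

```python
import numpy as np

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sketch of adaptation to a bimodal prior: posterior-mean
# decoding of a noisy log-odds measurement, approximated on a grid.
# Modes at logit^-1(+-1.5) ~ {0.18, 0.82} are arbitrary choices.
def decode_bimodal(p, sigma=1.0, modes=(-1.5, 1.5), mode_sigma=0.5):
    grid = np.linspace(-6, 6, 1201)            # latent log-odds grid
    prior = sum(np.exp(-0.5 * ((grid - m) / mode_sigma) ** 2) for m in modes)
    prior /= prior.sum()
    m_obs = np.log(p / (1 - p))                # measurement (noise-free here, for simplicity)
    lik = np.exp(-0.5 * ((m_obs - grid) / sigma) ** 2)
    post = prior * lik
    post /= post.sum()
    return inv_logit((post * grid).sum())      # decoded probability
```

Under this toy prior, probabilities between the modes are pulled outward toward the nearer mode (e.g. 0.3 is decoded below 0.3, 0.7 above 0.7), which is the qualitative signature of short-term prior adaptation described above.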