🤖 AI Summary
This paper addresses the longstanding trade-off between interpretability and behavioral fidelity in risk-decision modeling by introducing the first symbolically grounded, interpretable Prospect Theory model. Methodologically, it replaces black-box utility and probability weighting functions with explicit symbolic representations of core psychological mechanisms, such as loss aversion and framing effects, formalized as effect-size-driven symbolic features. These features are then mapped to interpretable, psychologically meaningful parameters via symbolic regression and mathematical formalization. The contributions are threefold: (1) predictive performance on par with state-of-the-art black-box models while preserving strict adherence to behavioral theory; (2) accurate reproduction of canonical risk-preference phenomena (e.g., the reflection effect and the certainty effect) on synthetic data; and (3) model parameters that all carry unambiguous, theory-grounded psychological interpretations, thereby bridging the gap between behavioral decision theory and interpretable machine learning.
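For orientation, the utility and probability weighting functions the paper replaces are typically parameterized in the canonical Tversky–Kahneman (1992) forms sketched below. This is background context, not the paper's symbolic model: the functional forms and parameter values are the classic published estimates, not anything fitted in this work.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Piecewise power value function: concave over gains, convex and
    steeper over losses (lam > 1 encodes loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities
    and underweights moderate-to-large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
```

The paper's contribution, as summarized above, is to swap black-box fitted versions of such functions for explicit symbolic features whose coefficients map directly onto constructs like loss aversion.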
📝 Abstract
We propose a novel symbolic modeling framework for decision-making under risk that merges interpretability with the core insights of Prospect Theory. Our approach replaces opaque utility curves and probability weighting functions with transparent, effect-size-guided features. We mathematically formalize the method, demonstrate its ability to replicate well-known framing and loss-aversion phenomena, and provide an end-to-end empirical validation on synthetic datasets. The resulting model achieves competitive predictive performance while yielding clear coefficients mapped onto psychological constructs, making it suitable for applications ranging from AI safety to economic policy analysis.
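As a concrete illustration of the kind of phenomenon the framework is validated against, the reflection effect (risk aversion over gains flipping to risk seeking over losses) falls out of the canonical Prospect Theory forms. The sketch below again uses the standard Tversky–Kahneman parameterization as a stand-in assumption; it is not the paper's fitted symbolic model.

```python
def pt_value(outcome, prob, alpha=0.88, lam=2.25, gamma=0.61):
    """Subjective value of a one-outcome gamble (outcome with prob, else 0),
    under canonical Prospect Theory value and weighting functions."""
    v = outcome ** alpha if outcome >= 0 else -lam * (-outcome) ** alpha
    w = prob ** gamma / (prob ** gamma + (1 - prob) ** gamma) ** (1 / gamma)
    return w * v

sure_gain  = pt_value(50, 1.0)    # certain +50
risky_gain = pt_value(100, 0.5)   # 50% chance of +100
sure_loss  = pt_value(-50, 1.0)   # certain -50
risky_loss = pt_value(-100, 0.5)  # 50% chance of -100

# Gains: the sure thing wins (risk aversion).
# Losses: the gamble wins (risk seeking) -- the reflection effect.
print(sure_gain > risky_gain, risky_loss > sure_loss)  # prints: True True
```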