🤖 AI Summary
This study investigates how artificial intelligence (AI) predictions shape human decision-making, focusing on the psychological mechanism by which individuals forgo guaranteed gains because they believe AI can predict their behavior. In a large-scale preregistered randomized controlled experiment (N = 1,305) implementing a behavioral version of Newcomb's paradox, complemented by statistical modeling, the research demonstrates for the first time that when AI is perceived as a predictive authority, people are significantly more likely to restrict their own freedom of choice, even when the AI's prediction proves incorrect. Over 40% of participants treated AI as such an authority, which increased the odds of rejecting a guaranteed reward by a factor of 3.39 and reduced actual earnings by 10.7%–42.9%. These effects were robust across varied AI presentation formats and decision contexts, advancing our understanding of belief–behavior dynamics in human–AI interaction.
📝 Abstract
Artificial intelligence (AI) is understood to affect the content of people's decisions. Here, using a behavioral implementation of the classic Newcomb's paradox in 1,305 participants, we show that AI can also change how people decide. In this paradigm, belief in predictive authority can lead individuals to constrain decision-making, forgoing a guaranteed reward. Over 40% of participants treated AI as such a predictive authority. This significantly increased the odds of forgoing the guaranteed reward by a factor of 3.39 (95% CI: 2.45–4.70) compared with random framing, and reduced earnings by 10.7–42.9%. The effect appeared across AI presentations and decision contexts and persisted even when predictions failed. When people believe AI can predict their behavior, they may self-constrain it in anticipation of that prediction.
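To make the reported effect size concrete, here is a minimal sketch of how an odds ratio of 3.39 translates into probability changes. The odds ratio is taken from the abstract; the baseline probabilities are hypothetical illustrations, since the abstract does not report the baseline rate under random framing.

```python
def apply_odds_ratio(p0: float, odds_ratio: float) -> float:
    """Convert a baseline probability p0 into the probability implied
    by multiplying the baseline odds by the given odds ratio."""
    odds0 = p0 / (1 - p0)        # baseline odds
    odds1 = odds0 * odds_ratio   # odds after applying the effect
    return odds1 / (1 + odds1)   # back to a probability

# Hypothetical baseline rates of forgoing the guaranteed reward
# under random framing (not reported in the abstract):
for p0 in (0.10, 0.25, 0.50):
    p1 = apply_odds_ratio(p0, 3.39)
    print(f"baseline {p0:.0%} -> under AI framing {p1:.1%}")
```

Note that an odds ratio of 3.39 does not mean the probability itself triples: for a 50% baseline, the probability rises to about 77%, not 100%, which is why the abstract reports odds rather than raw likelihood.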