🤖 AI Summary
This study investigates when AI assistance enhances or impairs human decision-making, focusing on settings where humans and AI draw on overlapping information and humans adhere imperfectly to AI recommendations. Developing a Bayesian decision-theoretic model, the authors decompose the effect of AI assistance into the marginal value of the AI's information and a human behavioral distortion. They introduce a micro-founded measure of information overlap that unifies the characterization of collaborative regimes: augmentation, complementarity, impairment, and automation. Incorporating the cognitive bias of "correlation neglect," the analysis integrates insights from information economics and decision theory to show how AI capability and information overlap jointly determine collaborative performance, characterizing how human decisions shift under AI assistance and providing a theoretical foundation for designing effective human-AI collaboration systems.
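As a rough schematic of that decomposition (the notation below is illustrative, not taken from the paper):

$$
\Delta \;=\; \underbrace{V\big(\mathcal{I}_H \cup \mathcal{I}_{AI}\big) - V\big(\mathcal{I}_H\big)}_{\text{marginal value of AI information}} \;-\; \underbrace{\Lambda}_{\text{behavioral distortion}}
$$

where $\Delta$ is the change in the human's expected payoff from receiving AI assistance, $V(\cdot)$ is expected utility under fully Bayesian use of an information set, and $\Lambda \ge 0$ is the loss from combining the signals imperfectly (e.g., through correlation neglect).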
📝 Abstract
We develop a decision-theoretic model of human-AI interaction to study when AI assistance improves or impairs human decision-making. A human decision-maker observes private information and receives a recommendation from an AI system, but may combine these signals imperfectly. We show that the effect of AI assistance decomposes into two main forces: the marginal informational value of the AI beyond what the human already knows, and a behavioral distortion arising from how the human uses the AI's recommendation. Central to our analysis is a micro-founded measure of informational overlap between human and AI knowledge. We study an empirically relevant form of imperfect decision-making -- correlation neglect -- whereby humans treat AI recommendations as independent of their own information despite shared evidence. Under this model, we characterize how overlap and AI capability determine the human-AI interaction regime, spanning augmentation, impairment, complementarity, and automation, and derive implications for the design of AI-assisted decision systems.
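To make the correlation-neglect mechanism concrete, here is a minimal simulation sketch in Python (our construction, not code or notation from the paper). A shared signal is observed by both human and AI, creating informational overlap; a correlation-neglecting human adds the AI's full posterior log-odds on top of their own, double-counting the shared evidence. The symmetric binary-signal structure and the accuracy parameters `q_shared`, `q_human`, `q_ai` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=100_000, q_shared=0.7, q_human=0.6, q_ai=0.8):
    # Binary state with a uniform prior; three conditionally independent
    # symmetric signals, each matching the state with probability q_*.
    theta = rng.integers(0, 2, size=n)
    draw = lambda q: np.where(rng.random(n) < q, theta, 1 - theta)
    s_shared, s_human, s_ai = draw(q_shared), draw(q_human), draw(q_ai)

    def llr(s, q):
        # Log-likelihood ratio for theta=1 vs theta=0 of one symmetric signal.
        return np.where(s == 1, np.log(q / (1 - q)), np.log((1 - q) / q))

    # AI recommendation: posterior log-odds from the signals the AI sees
    # (the shared signal plus its private one).
    ai_lo = llr(s_shared, q_shared) + llr(s_ai, q_ai)

    # Fully Bayesian human: counts the shared evidence exactly once.
    bayes_lo = llr(s_shared, q_shared) + llr(s_human, q_human) + llr(s_ai, q_ai)

    # Correlation neglect: the human treats the AI's recommendation as
    # independent evidence, so the shared signal is double-counted.
    naive_lo = llr(s_shared, q_shared) + llr(s_human, q_human) + ai_lo

    acc = lambda lo: np.mean((lo > 0).astype(int) == theta)
    return {"AI alone": acc(ai_lo),
            "Bayesian human + AI": acc(bayes_lo),
            "Correlation-neglect human + AI": acc(naive_lo)}

for name, a in simulate().items():
    print(f"{name}: {a:.3f}")
```

In this toy setup, raising `q_shared` makes the double-counted shared evidence carry more weight relative to the private signals, illustrating how greater informational overlap amplifies the behavioral distortion that the decomposition above isolates.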