🤖 AI Summary
Humans systematically over-rely on AI recommendations in human-AI collaborative decision-making, undermining complementary advantages. This paper addresses incentive misalignment—the root cause—by proposing a context-sensitive incentive mechanism design framework that dynamically aligns incentive structures with task characteristics and human-AI capability complementarity. Through controlled behavioral experiments, we empirically evaluate how diverse incentive conditions affect human adoption of AI advice and overall decision quality. Results show that our mechanism significantly reduces over-reliance (p < 0.01) and improves both decision accuracy and efficiency; conversely, misaligned incentives induce strategic misuse and degrade collaborative performance. To our knowledge, this is the first work integrating formal incentive design with human-AI complementarity modeling. It provides an actionable theoretical framework and empirical foundation for building high-trust, high-performance human-AI collaborative systems.
📝 Abstract
Collaboration with artificial intelligence (AI) has improved human decision-making across various domains by leveraging the complementary capabilities of humans and AI. Yet humans systematically over-rely on AI advice, even when their independent judgment would yield superior outcomes, fundamentally undermining the potential of human-AI complementarity. Building on prior work, we identify prevailing incentive structures in human-AI decision-making as a structural driver of this over-reliance. To address this misalignment, we propose an alternative incentive mechanism designed to counteract systemic over-reliance. We empirically evaluate this approach through a behavioral experiment with 180 participants, finding that the proposed mechanism significantly reduces over-reliance. We also show that while appropriately designed incentives can enhance collaboration and decision quality, poorly designed incentives may distort behavior, introduce unintended consequences, and ultimately degrade performance. These findings underscore the importance of aligning incentives with task context and human-AI complementarities, and suggest that effective collaboration requires a shift toward context-sensitive incentive design.