🤖 AI Summary
This study addresses the challenge of achieving *appropriate reliance* (neither over-reliance nor under-reliance) on large language model (LLM) suggestions in human-AI collaborative decision-making. Through a randomized online controlled experiment (N = 400), we systematically evaluate three classes of reliance interventions across two distinct task paradigms: LSAT-style logical reasoning and image-based numerical estimation. Introducing the *appropriate reliance rate* as a gold-standard metric, we find that none of the interventions significantly improves appropriate reliance: over-reliance decreases, but under-reliance increases substantially. Notably, ~37% of erroneous acceptances of LLM suggestions are accompanied by heightened user confidence, indicating severe miscalibration between confidence and accuracy. This work establishes a rigorous, behaviorally grounded evaluation framework for LLM reliance interventions and reveals a fundamental limitation of current approaches to calibrating human reliance behavior.
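The summary names the *appropriate reliance rate* without spelling out how it is computed. As a rough sketch, the snippet below shows one common behavioral operationalization from the human-AI reliance literature: a trial counts as appropriate reliance when the participant accepts correct advice or rejects incorrect advice, with over- and under-reliance defined on the complementary cases. The `Trial` fields and function names are illustrative assumptions, not the paper's actual definitions or code.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    advice_correct: bool    # was the LLM suggestion correct?
    accepted_advice: bool   # did the post-advice answer follow the suggestion?
    initial_correct: bool   # was the independent (pre-advice) answer correct?

def appropriate_reliance_rate(trials: list[Trial]) -> float:
    """Share of trials where the reliance decision was appropriate:
    accept correct advice, or reject incorrect advice."""
    ok = sum(t.accepted_advice == t.advice_correct for t in trials)
    return ok / len(trials)

def over_reliance_rate(trials: list[Trial]) -> float:
    """Share of wrong-advice trials where the participant accepted the advice."""
    wrong = [t for t in trials if not t.advice_correct]
    return sum(t.accepted_advice for t in wrong) / len(wrong) if wrong else 0.0

def under_reliance_rate(trials: list[Trial]) -> float:
    """Share of correct-advice trials where the participant rejected the advice."""
    right = [t for t in trials if t.advice_correct]
    return sum(not t.accepted_advice for t in right) / len(right) if right else 0.0
```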
📝 Abstract
As Large Language Models (LLMs) become integral to decision-making, optimism about their power is tempered by concern over their errors. Users may over-rely on LLM advice that is confidently stated but wrong, or under-rely on it out of mistrust. Reliance interventions have been developed to help users of LLMs, but their ability to foster appropriate reliance has not been rigorously evaluated. We benchmark the performance of three such interventions in a randomized online experiment with 400 participants attempting two challenging tasks: LSAT logical reasoning and image-based numerical estimation. For each question, participants first answered independently, then received LLM advice modified by one of the three reliance interventions and answered the question again. Our findings indicate that while the interventions reduce over-reliance, they generally fail to improve appropriate reliance. Furthermore, in certain contexts people became more confident after making wrong reliance decisions, demonstrating poor calibration. Based on our findings, we discuss implications for designing effective reliance interventions in human-LLM collaboration.