To Rely or Not to Rely? Evaluating Interventions for Appropriate Reliance on Large Language Models

📅 2024-12-20
🏛️ arXiv.org
📈 Citations: 1 (influential: 0)
🤖 AI Summary
This study addresses the challenge of achieving *appropriate reliance* (neither over-reliance nor under-reliance) on large language model (LLM) suggestions in human-AI collaborative decision-making. Through a randomized online controlled experiment (N = 400), we systematically evaluate three classes of reliance interventions across two distinct task paradigms: LSAT-style logical reasoning and image-based numerical estimation. Using the *appropriate reliance rate* as the primary behavioral metric, we find that none of the interventions significantly improves appropriate reliance: over-reliance decreases, but under-reliance increases substantially. Notably, roughly 37% of erroneous acceptances of LLM suggestions are accompanied by heightened user confidence, indicating miscalibration between confidence and accuracy. This work establishes a rigorous, behaviorally grounded evaluation framework for LLM reliance interventions and reveals a fundamental limitation of current approaches to calibrating human reliance behavior.
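The *appropriate reliance rate* can be made concrete with a small sketch. This is a minimal illustration, assuming the common behavioral definitions from the reliance literature (over-reliance = following incorrect advice, under-reliance = rejecting correct advice); the paper's exact operationalization may differ, and the `Trial` structure and function names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    llm_correct: bool        # was the LLM's advice correct?
    user_followed_llm: bool  # did the participant's final answer adopt the advice?

def reliance_rates(trials):
    """Return (appropriate, over, under) reliance rates over a list of trials.

    Assumed definitions:
      - over-reliance:  following the LLM when its advice is wrong
      - under-reliance: rejecting the LLM when its advice is right
      - appropriate:    everything else (follow correct / reject incorrect)
    """
    n = len(trials)
    over = sum(1 for t in trials if t.user_followed_llm and not t.llm_correct)
    under = sum(1 for t in trials if not t.user_followed_llm and t.llm_correct)
    appropriate = n - over - under
    return appropriate / n, over / n, under / n
```

Under these definitions, an intervention that reduces over-reliance but inflates under-reliance by the same amount leaves the appropriate reliance rate unchanged, which is exactly the trade-off the study reports.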

📝 Abstract
As Large Language Models become integral to decision-making, optimism about their power is tempered by concern over their errors. Users may over-rely on LLM advice that is confidently stated but wrong, or under-rely due to mistrust. Reliance interventions have been developed to help users of LLMs, but they have not been rigorously evaluated for their effect on appropriate reliance. We benchmark three relevant interventions in a randomized online experiment with 400 participants attempting two challenging tasks: LSAT logical reasoning and image-based numerical estimation. For each question, participants first answered independently, then received LLM advice modified by one of the three interventions and answered again. Our findings indicate that while the interventions reduce over-reliance, they generally fail to improve appropriate reliance. Furthermore, people became more confident after making wrong reliance decisions in certain contexts, demonstrating poor calibration. Based on these findings, we discuss implications for designing effective reliance interventions in human-LLM collaboration.
Problem

Research questions and friction points this paper is trying to address.

Evaluating interventions for appropriate reliance on LLMs
Assessing over-reliance and under-reliance on LLM advice
Improving human-LLM collaboration through effective reliance interventions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomized online experiment with 400 participants
Three reliance interventions benchmarked for LLM use
LSAT and image-based tasks used for evaluation