Measuring and mitigating overreliance is necessary for building human-compatible AI

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This position paper addresses the problem of human overreliance on large language models (LLMs) in high-stakes domains such as healthcare and personalized advice. After identifying three key gaps in how overreliance is currently measured, the authors propose an interdisciplinary framework that integrates human factors engineering, cognitive science, and AI system design. Methodologically, overreliance is analyzed through three complementary lenses: systemic design flaws, user cognitive biases, and model uncertainty, enabling both quantitative measurement and tiered intervention strategies. The contributions are: (1) consolidating the severe consequences of overreliance, including high-stakes errors, governance failures, and cognitive atrophy; (2) reframing LLM development toward augmenting human capability rather than automating it away; and (3) establishing theoretical foundations and actionable pathways for safe, trustworthy human-AI collaboration. The work advances responsible AI deployment by bridging technical design, cognitive understanding, and socio-technical governance.

📝 Abstract
Large language models (LLMs) distinguish themselves from previous technologies by functioning as collaborative "thought partners," capable of engaging more fluidly in natural language. As LLMs increasingly influence consequential decisions across diverse domains from healthcare to personal advice, the risk of overreliance - relying on LLMs beyond their capabilities - grows. This position paper argues that measuring and mitigating overreliance must become central to LLM research and deployment. First, we consolidate risks from overreliance at both the individual and societal levels, including high-stakes errors, governance challenges, and cognitive deskilling. Then, we explore LLM characteristics, system design features, and user cognitive biases that - together - raise serious and unique concerns about overreliance in practice. We also examine historical approaches for measuring overreliance, identifying three important gaps and proposing three promising directions to improve measurement. Finally, we propose mitigation strategies that the AI research community can pursue to ensure LLMs augment rather than undermine human capabilities.
Problem

Research questions and friction points this paper is trying to address.

Addressing the risks of overreliance on LLMs in high-stakes domains such as healthcare and personal advice
Closing gaps in how overreliance is measured in human-AI collaboration, accounting for user cognitive biases
Developing mitigation strategies so that LLMs augment rather than undermine human capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three proposed directions for improving the measurement of overreliance
Tiered mitigation strategies spanning system design and user bias awareness
Reframing LLM development around human capability augmentation rather than automation