Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making

📅 2024-03-25
🏛️ arXiv.org
📈 Citations: 12
Influential: 0
🤖 AI Summary
In current AI-assisted decision-making, humans typically accept or reject AI recommendations wholesale and passively, with no fine-grained way to express disagreement and no deep, iterative discussion, which impedes reflective thinking and collaborative efficacy. Method: This paper proposes Human-AI Deliberation, a novel paradigm, and introduces Deliberative AI, an LLM-based architecture that supports dimension-level opinion articulation, interpretable dialogue, and dynamic decision updating. Integrating theories of human deliberation, dimension-wise analysis mechanisms, and a mixed-methods evaluation (behavioral analysis plus subjective feedback), the framework is validated on a graduate admissions task. Contribution/Results: It significantly improves appropriate human reliance (+28.3%) and task accuracy (+19.7%), while receiving strongly positive user ratings on explainability and the deliberation experience. Its novelty lies in systematically embedding deliberation mechanisms into the AI decision loop, enabling a shift from "review-and-adopt" to "co-constructive decision-making."
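The dimension-level loop described above can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration, not the authors' implementation: the dimension names, the 0.3 conflict threshold, and the equal human/AI weighting in the decision update are all hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class DimensionOpinion:
    """One decision dimension with the human's and the AI's rating (0-1)."""
    dimension: str       # e.g. "GPA", "research experience" (illustrative)
    human_score: float
    ai_score: float

    def in_conflict(self, threshold: float = 0.3) -> bool:
        # Flag a dimension for deliberative discussion when the two
        # opinions diverge by more than the (assumed) threshold.
        return abs(self.human_score - self.ai_score) > threshold

def deliberate(opinions: list[DimensionOpinion]) -> tuple[list[str], float]:
    """Return dimensions flagged for discussion and a provisional decision
    score. Equal human/AI weighting is an illustrative assumption."""
    flagged = [o.dimension for o in opinions if o.in_conflict()]
    score = sum((o.human_score + o.ai_score) / 2 for o in opinions) / len(opinions)
    return flagged, score

# Example: human and AI broadly agree on GPA but diverge on research
# experience, so only that dimension is surfaced for discussion.
opinions = [
    DimensionOpinion("GPA", 0.9, 0.85),
    DimensionOpinion("research experience", 0.4, 0.8),
]
flagged, score = deliberate(opinions)
```

In the actual system, flagged dimensions would feed a conversational discussion phase, after which either party may revise its per-dimension opinion before the decision is updated.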

📝 Abstract
In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole. In such a paradigm, humans are found to rarely trigger analytical thinking and face difficulties in communicating the nuances of conflicting opinions to the AI when disagreements occur. To tackle this challenge, we propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making. Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates. To empower AI with deliberative capabilities, we designed Deliberative AI, which leverages large language models (LLMs) as a bridge between humans and domain-specific models to enable flexible conversational interactions and faithful information provision. An exploratory evaluation on a graduate admissions task shows that Deliberative AI outperforms conventional explainable AI (XAI) assistants in improving humans' appropriate reliance and task performance. Based on a mixed-methods analysis of participant behavior, perception, user experience, and open-ended feedback, we draw implications for future AI-assisted decision tool design.
Problem

Research questions and friction points this paper is trying to address.

Humans rarely engage analytical thinking when passively reviewing AI suggestions.
Whole-recommendation accept/reject leaves no way to communicate nuanced disagreement to the AI.
Miscalibrated reliance on AI harms decision-making performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-AI Deliberation framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
LLMs bridge humans and domain-specific models, enabling flexible conversation grounded in faithful model information.
Deliberative AI outperforms conventional XAI assistants on appropriate reliance and task performance in a graduate admissions task.
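The "LLM as bridge" idea can be sketched as a thin conversational layer that maps a user's free-text disagreement onto a known dimension and grounds its reply in a domain-specific model's output. Both the keyword-matching "LLM" and the domain model below are mock stand-ins; the paper does not specify this interface, so every name here is hypothetical.

```python
def mock_domain_model(dimension: str) -> str:
    """Stand-in for a task-specific model's per-dimension evidence.
    The evidence strings are fabricated examples."""
    evidence = {
        "gpa": "GPA is in the top quartile of the applicant pool.",
        "research": "Two first-author publications were detected.",
    }
    return evidence.get(dimension, "No evidence available for this dimension.")

def bridge(user_message: str) -> str:
    """Stand-in for the LLM layer: map free text to a known dimension,
    then answer using only the domain model's output (faithful grounding)."""
    msg = user_message.lower()
    for dim in ("gpa", "research"):
        if dim in msg:
            return f"On {dim}: {mock_domain_model(dim)}"
    # If no dimension is recognized, ask a clarifying question instead
    # of speculating, keeping the reply grounded.
    return "Could you say which dimension you disagree about?"

reply = bridge("I think the GPA rating is too high")
```

The design point this sketch illustrates: the conversational layer never answers from its own knowledge, only from the domain model, which is how the abstract's "faithful information provision" could be enforced.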