🤖 AI Summary
In current AI-assisted decision-making, humans typically accept or reject an AI's recommendation as a whole, with no fine-grained way to express disagreement and no deep, iterative deliberation. This impedes reflective thinking and collaborative efficacy.
Method: The paper proposes Human-AI Deliberation, a novel framework for human-AI decision-making, built on an LLM-based Deliberative AI architecture that supports dimension-level opinion elicitation, interpretable deliberative dialogue, and dynamic decision updating. Drawing on human deliberation theory, dimension-wise analytical mechanisms, and a mixed-methods evaluation (behavioral analysis plus subjective feedback), the framework is validated on a graduate admissions task.
Contribution/Results: It significantly improves appropriate reliance (+28.3%) and task accuracy (+19.7%) over conventional XAI assistants, and receives strongly positive user ratings for explainability and the deliberation experience. Its novelty lies in systematically embedding deliberation mechanisms into the AI decision loop, enabling a paradigm shift from "review-and-adopt" to "co-constructive decision-making."
📝 Abstract
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole. In such a paradigm, humans rarely trigger analytical thinking and face difficulties in communicating the nuances of conflicting opinions to the AI when disagreements occur. To tackle this challenge, we propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making. Based on theories of human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates. To empower AI with deliberative capabilities, we designed Deliberative AI, which leverages large language models (LLMs) as a bridge between humans and domain-specific models to enable flexible conversational interactions and faithful information provision. An exploratory evaluation on a graduate admissions task shows that Deliberative AI outperforms conventional explainable AI (XAI) assistants in improving humans' appropriate reliance and task performance. Based on a mixed-methods analysis of participant behavior, perception, user experience, and open-ended feedback, we draw implications for the design of future AI-assisted decision tools.