Argumentative Human-AI Decision-Making: Toward AI Agents That Reason With Us, Not For Us

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Combining computational argumentation with large language models: through argumentation framework mining, synthesis, and reasoning, the paper works toward contestable, revisable human-AI collaborative decision-making.

📝 Abstract
Computational argumentation offers formal frameworks for transparent, verifiable reasoning but has traditionally been limited by its reliance on domain-specific information and extensive feature engineering. In contrast, LLMs excel at processing unstructured text, yet their opaque nature makes their reasoning difficult to evaluate and trust. We argue that the convergence of these fields will lay the foundation for a new paradigm: Argumentative Human-AI Decision-Making. We analyze how the synergy of argumentation framework mining, argumentation framework synthesis, and argumentative reasoning enables agents that do not just justify decisions, but engage in dialectical processes where decisions are contestable and revisable -- reasoning with humans rather than for them. This convergence of computational argumentation and LLMs is essential for human-aware, trustworthy AI in high-stakes domains.
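The formal, verifiable reasoning the abstract attributes to computational argumentation can be illustrated with a minimal sketch of a Dung-style abstract argumentation framework. This is a generic illustration, not the paper's implementation; the function name, argument labels, and the example attack graph are hypothetical.

```python
# Minimal sketch of an abstract argumentation framework (Dung-style).
# Arguments are atoms; attacks are directed pairs. The grounded extension
# (the most skeptical set of acceptable arguments) is the least fixpoint
# of the characteristic function F(S) = {a | S defends a}.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, s):
        # a is defended by s if every attacker of a is counter-attacked by s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Hypothetical example: c attacks b, b attacks a. c is unattacked, so it is
# accepted; c defeats b, so a is defended and accepted as well.
args = {"a", "b", "c"}
atk = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, atk)))  # → ['a', 'c']
```

Because every accepted argument can be traced to the attacks it survives, a decision grounded in such a framework is contestable: a human can add a new counter-argument and re-run the semantics, which is the revisability the paper argues for.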
Problem

Research questions and friction points this paper is trying to address.

Argumentative Human-AI Decision-Making
Computational Argumentation
Large Language Models
Trustworthy AI
Dialectical Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

computational argumentation
large language models
argumentative reasoning
human-AI collaboration
trustworthy AI