Argumentation for Explainable and Globally Contestable Decision Support with LLMs

📅 2026-03-15
🤖 AI Summary
This work addresses the challenge of deploying large language models (LLMs) in high-stakes decision-making, where opacity and unpredictability undermine trust and existing explainability methods offer only local, instance-level justifications that leave the general decision logic uncorrected. To overcome these limitations, the authors propose ArgEval, a framework that integrates computational argumentation theory, ontological modeling, and LLMs to construct task-specific option ontologies and general-purpose argumentation frameworks (AFs) for the structured evaluation of decision alternatives. ArgEval introduces, for the first time, a global contestability mechanism grounded in shared argument structures, moving beyond the traditional constraints of binary choices and localized explanations and enabling systematic correction of flawed reasoning. Evaluated on glioblastoma treatment recommendation, ArgEval produces clinically aligned, interpretable suggestions, demonstrating its efficacy and practical utility.

📝 Abstract
Large language models (LLMs) exhibit strong general capabilities, but their deployment in high-stakes domains is hindered by their opacity and unpredictability. Recent work has taken meaningful steps towards addressing these issues by augmenting LLMs with post-hoc reasoning based on computational argumentation, providing faithful explanations and enabling users to contest incorrect decisions. However, this paradigm is limited to pre-defined binary choices and only supports local contestation for specific instances, leaving the underlying decision logic unchanged and prone to repeated mistakes. In this paper, we introduce ArgEval, a framework that shifts from instance-specific reasoning to structured evaluation of general decision options. Rather than mining arguments solely for individual cases, ArgEval systematically maps task-specific decision spaces, builds corresponding option ontologies, and constructs general argumentation frameworks (AFs) for each option. These frameworks can then be instantiated to provide explainable recommendations for specific cases while still supporting global contestability through modification of the shared AFs. We investigate the effectiveness of ArgEval on treatment recommendation for glioblastoma, an aggressive brain tumour, and show that it can produce explainable guidance aligned with clinical practice.
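To make the abstract's core mechanism concrete, here is a minimal sketch (not the paper's implementation) of a Dung-style abstract argumentation framework with grounded-extension computation. The argument names and attack relations are invented for illustration; the sketch only shows how a shared AF for one decision option could be evaluated, and how editing that shared AF (global contestation) changes the outcome for every case instantiated from it.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of the AF (arguments, attacks).

    `attacks` is a set of (attacker, target) pairs. We iterate the
    characteristic function from the empty set to its least fixed point:
    an argument is acceptable iff each of its attackers is attacked
    by the current extension.
    """
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers_of[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Hypothetical shared AF for one treatment option (names are invented):
args = {"recommend", "contraindication", "updated_guideline"}
atts = {("contraindication", "recommend"),
        ("updated_guideline", "contraindication")}

# "updated_guideline" defeats the contraindication, reinstating "recommend".
print(grounded_extension(args, atts))
# → {'updated_guideline', 'recommend'}

# Global contestation: removing one attack from the *shared* AF changes
# the verdict for all future cases instantiated from it, not just this one.
atts.discard(("updated_guideline", "contraindication"))
print(grounded_extension(args, atts))
# → {'updated_guideline', 'contraindication'}
```

This illustrates the contrast the abstract draws: local contestation would override a single recommendation, whereas editing the shared AF corrects the decision logic itself.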
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
Argumentation
Global Contestability
Decision Support
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Argumentation Frameworks
Explainable AI
Global Contestability
Option Ontology
Structured Decision Evaluation