AI Summary
This work addresses the limited interpretability and lack of user-intervention capabilities of large language models (LLMs) in binary decision tasks by proposing and implementing an interactive visualization system based on Argumentative Large Language Models (ArgLLMs). The system enables LLMs to generate structured reasoning processes, provides intuitive visual explanations, and allows users to actively interrogate and correct reasoning errors in real time. Furthermore, it integrates trusted external knowledge sources to strengthen the evidential basis of decisions. ArgLLM-App, publicly released at https://argllm.app, is the first interactive platform supporting human-AI collaborative argumentation within the ArgLLM framework, and demonstrates its effectiveness in enhancing decision transparency, model interpretability, and meaningful user participation.
Abstract
Argumentative LLMs (ArgLLMs) are an existing approach leveraging Large Language Models (LLMs) and computational argumentation for decision-making, with the aim of making the resulting decisions faithfully explainable to and contestable by humans. Here we propose a web-based system implementing ArgLLM-empowered agents for binary tasks. ArgLLM-App supports visualisation of the produced explanations and interaction with human users, allowing them to identify and contest any mistakes in the system's reasoning. It is highly modular and enables drawing information from trusted external sources. ArgLLM-App is publicly available at https://argllm.app, with a video demonstration at https://youtu.be/vzwlGOr0sPM.
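To make the computational-argumentation side of the abstract concrete, the following is a minimal sketch of how a binary decision can be derived from an argumentation structure. It assumes a quantitative bipolar argumentation framework evaluated under DF-QuAD-style semantics, as commonly used in ArgLLM work; the class and function names, base scores, and example arguments here are illustrative assumptions, not taken from ArgLLM-App itself.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    base_score: float                      # initial plausibility in [0, 1]
    attackers: list = field(default_factory=list)
    supporters: list = field(default_factory=list)

def aggregate(strengths):
    """DF-QuAD-style aggregation of attacker (or supporter) strengths."""
    v = 1.0
    for s in strengths:
        v *= 1.0 - s
    return 1.0 - v

def strength(arg):
    """Recursively compute final strength (assumes an acyclic framework)."""
    va = aggregate(strength(a) for a in arg.attackers)
    vs = aggregate(strength(s) for s in arg.supporters)
    t = arg.base_score
    if va >= vs:
        return t - t * (va - vs)          # net attack weakens the claim
    return t + (1.0 - t) * (vs - va)      # net support strengthens it

# Hypothetical example: a claim with one attacker and one supporter.
claim = Argument(
    "The loan should be approved", 0.5,
    attackers=[Argument("Applicant carries high debt", 0.6)],
    supporters=[Argument("Stable income for ten years", 0.8)],
)
decision = strength(claim) > 0.5          # binary decision from final strength
```

In a system like the one described, users contest a decision by editing base scores or adding/removing attackers and supporters, after which final strengths are recomputed and re-visualised.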