Adaptive Collaboration of Arena-Based Argumentative LLMs for Explainable and Contestable Legal Reasoning

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of verifiable and contestable structured explanations in current large language models for legal reasoning, which hinders auditability and user intervention. The authors propose a neuro-symbolic hybrid framework that, for the first time, integrates adaptive multi-agent collaboration with a formal argumentation system. Built upon an Arena-based Quantitative Bipolar Argumentation Framework (A-QBAF), the approach dynamically generates, adjudicates, and refines legal arguments. It introduces mechanisms for conflict resolution, uncertainty-aware escalation, and a human-in-the-loop contestable workflow, thereby enabling transparent and intervenable reasoning. Evaluated on the LegalBench benchmark with Gemini-2.5-Flash-Lite and Gemini-2.5-Flash backbones, the framework significantly outperforms strong baselines, achieving both high predictive accuracy and structured interpretability.

📝 Abstract
Legal reasoning requires not only high accuracy but also the ability to justify decisions through verifiable and contestable arguments. However, existing Large Language Model (LLM) approaches, such as Chain-of-Thought (CoT) and Retrieval-Augmented Generation (RAG), often produce unstructured explanations that lack a formal mechanism for verification or user intervention. To address this limitation, we propose Adaptive Collaboration of Argumentative LLMs (ACAL), a neuro-symbolic framework that integrates adaptive multi-agent collaboration with an Arena-based Quantitative Bipolar Argumentation Framework (A-QBAF). ACAL dynamically deploys expert agent teams to construct arguments, employs a clash resolution mechanism to adjudicate conflicting claims, and utilizes uncertainty-aware escalation for borderline cases. Crucially, our framework supports a Human-in-the-Loop (HITL) contestability workflow, enabling users to directly audit and modify the underlying reasoning graph to influence the final judgment. Empirical evaluations on the LegalBench benchmark demonstrate that ACAL outperforms strong baselines across Gemini-2.5-Flash-Lite and Gemini-2.5-Flash architectures, effectively balancing efficient predictive performance with structured transparency and contestability. Our implementation is available at: https://github.com/loc110504/ACAL.
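The abstract centers on a Quantitative Bipolar Argumentation Framework (QBAF), in which each argument carries a base score and arguments attack or support one another; the final judgment follows from the resulting argument strengths, and a user can contest it by editing the graph. This page does not specify which gradual semantics ACAL uses, so the sketch below is illustrative only, using the well-known DF-QuAD semantics on a tiny acyclic graph (all argument names and scores are invented for the example):

```python
from functools import reduce

# Toy QBAF: each argument has a base score in [0, 1], plus the
# attack and support edges that point at it.
base = {"claim": 0.5, "att1": 0.6, "sup1": 0.8}
attackers = {"claim": ["att1"], "att1": [], "sup1": []}
supporters = {"claim": ["sup1"], "att1": [], "sup1": []}

def aggregate(values):
    # Probabilistic sum: combined force of several attackers or supporters.
    # Returns 0.0 when there are none.
    return 1.0 - reduce(lambda acc, v: acc * (1.0 - v), values, 1.0)

def strength(arg):
    # DF-QuAD combination of the base score with the aggregated
    # attack force (va) and support force (vs); assumes an acyclic graph.
    va = aggregate([strength(a) for a in attackers[arg]])
    vs = aggregate([strength(s) for s in supporters[arg]])
    b = base[arg]
    if va >= vs:
        return b - b * (va - vs)
    return b + (1.0 - b) * (vs - va)

print(round(strength("claim"), 3))  # → 0.6
```

A contestability workflow of the kind the paper describes would amount to a user inspecting this graph, then lowering a base score or adding an edge and re-evaluating to see how the judgment shifts.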
Problem

Research questions and friction points this paper is trying to address.

legal reasoning
explainability
contestability
argumentation
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

neuro-symbolic framework
argumentative multi-agent collaboration
quantitative bipolar argumentation
human-in-the-loop contestability
adaptive legal reasoning
Hoang-Loc Cao
Ho Chi Minh University of Science, Vietnam
Phuc Ho
Ho Chi Minh University of Science, Vietnam
Truong Thanh Hung Nguyen
University of New Brunswick, National Research Council Canada
Contestable AI · Explainable AI · Human-centered AI · Edge Computing
Phuc Truong Loc Nguyen
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
Dinh Thien Loc Nguyen
Ho Chi Minh University of Science, Vietnam
Hung Cao
University of New Brunswick
Analytics Everywhere · Internet of Things (IoT) · Edge/Fog/Cloud Computing · Machine Learning · Smart Cities