ART: Adaptive Reasoning Trees for Explainable Claim Verification

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of establishing trustworthiness in high-stakes fact-checking with large language models, where opacity and a lack of contestability often undermine reliability. To this end, the paper proposes Adaptive Reasoning Trees (ART), a novel, contestable hierarchical argumentation mechanism. ART constructs tree-structured arguments comprising supporting and rebutting sub-claims, which are then evaluated through pairwise, bottom-up comparisons adjudicated by a judge LLM to produce transparent and verifiable reasoning paths. This approach overcomes the limitations of traditional chain-of-thought reasoning by enabling structured, multi-perspective deliberation. Experimental results show that ART consistently outperforms strong baselines across multiple benchmark datasets, setting a new standard for interpretable and reliable fact-checking systems.

📝 Abstract
Large Language Models (LLMs) are powerful candidates for complex decision-making, leveraging vast encoded knowledge and remarkable zero-shot abilities. However, their adoption in high-stakes environments is hindered by their opacity: their outputs lack faithful explanations and cannot be effectively contested to correct errors, undermining trustworthiness. In this paper, we propose ART (Adaptive Reasoning Trees), a hierarchical method for claim verification. The process begins with a root claim, which branches into supporting and attacking child arguments. An argument's strength is determined bottom-up via a pairwise tournament of its children, adjudicated by a judge LLM, allowing a final, transparent, and contestable verdict to be derived systematically, a property that methods like Chain-of-Thought (CoT) lack. We empirically validate ART on multiple datasets, analyzing different argument generators and comparison strategies. Our findings show that ART's structured reasoning outperforms strong baselines, establishing a new benchmark for explainable claim verification that is more reliable and makes the overall decision-making step clearer.
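The bottom-up tournament described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Argument` structure, the "support"/"attack" stance labels, and the length-based `stub_judge` are all assumptions, with the stub standing in for the judge LLM that would compare two arguments in ART.

```python
from dataclasses import dataclass, field

# Illustrative sketch of ART-style bottom-up evaluation (assumed structure,
# not the paper's code). In ART the judge would be an LLM prompt comparing
# two arguments; here it is a deterministic stub.

@dataclass
class Argument:
    text: str
    stance: str = "root"          # "support" or "attack", relative to the parent
    children: list = field(default_factory=list)

def stub_judge(a, b):
    """Stand-in for the judge LLM: deterministically prefers the longer argument."""
    return a if len(a.text) >= len(b.text) else b

def holds(node, judge=stub_judge):
    """An argument stands if the winner of a pairwise tournament among its
    surviving children supports it (or if it is unchallenged)."""
    live = [c for c in node.children if holds(c, judge)]   # bottom-up recursion
    if not live:
        return True               # no surviving challengers: the argument stands
    winner = live[0]
    for challenger in live[1:]:   # pairwise tournament, adjudicated by the judge
        winner = judge(winner, challenger)
    return winner.stance == "support"

# Tiny example tree: one supporting and one attacking child under the root claim.
claim = Argument(
    "The Eiffel Tower is located in Paris",
    children=[
        Argument("Official records place the tower in the 7th arrondissement of Paris",
                 stance="support"),
        Argument("A half-scale replica stands in Las Vegas", stance="attack"),
    ],
)
print(holds(claim))  # True: the supporting argument wins the stub tournament
```

The reasoning path is contestable in the sense the abstract describes: a user can attach a new child argument at any node and rerun the tournament, and the verdict changes only through an explicit, inspectable comparison.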
Problem

Research questions and friction points this paper is trying to address.

explainable AI
claim verification
trustworthiness
reasoning transparency
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Reasoning Trees
Explainable Claim Verification
Hierarchical Argumentation
Contestable AI
LLM-based Reasoning