Improving Human Verification of LLM Reasoning through Interactive Explanation Interfaces

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language model (LLM)-generated reasoning traces, such as chain-of-thought (CoT), program-of-thought (PoT), and graph-structured reasoning, can enhance human comprehension of mathematical solution logic and error detection capability. Method: the authors propose three interactive reasoning interfaces (iCoT, iPoT, and iGraph) and design an automated framework for converting standard CoT traces into interactive, node-based formats. These interfaces support dynamic expansion, node-level traceability, and visual verification to improve explainability and auditability. Contribution/Results: in an empirical study with 125 participants, iGraph achieved the highest error detection rate (85.6%, vs. 73.5% for standard CoT) and the fastest average response time (57.9 seconds, vs. 64.7 seconds for the baseline), while significantly improving perceived clarity and trust. The work introduces a human-AI collaborative interaction paradigm explicitly designed for mathematical reasoning verification, advancing trustworthy AI applications in education.
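The conversion framework itself is not shown on this page; the sketch below is a minimal, hypothetical illustration of the node-based idea. The `ReasoningNode` structure, the regex-based step splitter, and the linear parent links are all assumptions for illustration, not the paper's actual (automated, LLM-driven) implementation.

```python
import re
from dataclasses import dataclass, field

@dataclass
class ReasoningNode:
    """One step of a reasoning trace, rendered as an expandable node."""
    step_id: int
    text: str
    parents: list = field(default_factory=list)  # assumed: earlier steps this one builds on
    expanded: bool = False                        # assumed interactive state: collapsed by default

def cot_to_nodes(cot_trace: str) -> list:
    """Hypothetical converter: split a 'Step 1: ...' style CoT trace into
    linked nodes. A naive regex split stands in for the paper's automated
    framework; it only illustrates the target data structure."""
    steps = [s.strip() for s in re.split(r"Step \d+:", cot_trace) if s.strip()]
    return [
        ReasoningNode(step_id=i, text=text, parents=[i - 1] if i > 1 else [])
        for i, text in enumerate(steps, start=1)
    ]

trace = "Step 1: Let x be the price. Step 2: 2x + 3 = 11, so 2x = 8. Step 3: x = 4."
for node in cot_to_nodes(trace):
    print(node.step_id, node.parents, node.text)
```

An interface like iGraph would render such nodes as a graph, expanding a node's text on click and using the parent links to trace which earlier steps a conclusion depends on.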

📝 Abstract
The reasoning capabilities of Large Language Models (LLMs) have led to their increasing use in critical applications, particularly education, where they support problem-solving, tutoring, and personalized study. While a plethora of works show the effectiveness of LLMs in generating step-by-step solutions through chain-of-thought (CoT) reasoning on reasoning benchmarks, little is understood about whether the generated CoT helps end-users comprehend mathematical reasoning problems and detect errors/hallucinations in LLM-generated solutions. To address this gap and contribute to understanding how reasoning can improve human-AI interaction, we present three new interactive reasoning interfaces: interactive CoT (iCoT), interactive Program-of-Thought (iPoT), and interactive Graph (iGraph), along with a novel framework that converts the LLM's reasoning from traditional CoT into these alternative, interactive formats. Across 125 participants, we found that interactive interfaces significantly improved performance. Specifically, the iGraph interface yielded the highest clarity and error detection rate (85.6%), followed by iPoT (82.5%) and iCoT (80.6%), all outperforming standard CoT (73.5%). Interactive interfaces also led to faster response times: participants using iGraph were fastest (57.9 secs), compared to iCoT and iPoT (60 secs) and the standard CoT baseline (64.7 secs). Furthermore, participants preferred the iGraph reasoning interface, citing its superior ability to let users follow the LLM's reasoning process. We discuss the implications of these results and provide recommendations for the future design of reasoning models.
Problem

Research questions and friction points this paper is trying to address.

Improving human verification of LLM reasoning through interactive explanation interfaces
Enhancing error detection in LLM-generated mathematical solutions via interactive formats
Evaluating interactive reasoning interfaces for better human comprehension of AI reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive CoT (iCoT) interface improves human verification of LLM reasoning
Interactive PoT (iPoT) interface enhances the error detection rate (see the toy Program-of-Thought sketch below)
Interactive Graph (iGraph) interface yields the highest clarity and error detection rate
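As background for the iPoT interface: Program-of-Thought expresses each reasoning step as a line of executable code, so a verifier can read the solution and also run it. The toy problem, variable names, and step comments below are illustrative assumptions, not taken from the paper.

```python
# Toy Program-of-Thought trace for: "A shirt costs $20 after a 20% discount.
# What was the original price?" Each line is one inspectable reasoning step.
discounted_price = 20.0                             # step 1: given
discount_rate = 0.20                                # step 2: given
paid_fraction = 1 - discount_rate                   # step 3: buyer pays 80% of original
original_price = discounted_price / paid_fraction   # step 4: invert the discount
assert abs(original_price - 25.0) < 1e-9            # step 5: executable sanity check
print(original_price)                               # 25.0
```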
Runtao Zhou
University of Virginia
Giang Nguyen
Auburn University
Nikita Kharya
Independent Researcher
Anh Nguyen
Auburn University
Chirag Agarwal
University of Virginia
XAI · TrustworthyML · Artificial Intelligence