Humans Perceive Wrong Narratives from AI Reasoning Texts

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether humans can accurately discern the actual computational process reflected in AI-generated stepwise reasoning texts—and thus whether such texts possess genuine transparency and interpretability. Using a counterfactual causal chain identification paradigm, we conducted human-subject experiments wherein participants identified which reasoning steps genuinely influenced subsequent derivations. Results show an average human accuracy of only 29.3% (vs. a 25% random baseline), rising to just 42% even on high-consensus items—revealing a fundamental misalignment between human cognition and model reasoning in linguistic representation and causal logic. The core contributions are threefold: (1) first empirical evidence that reasoning traces cannot be taken at face value and must instead be treated as decodable, non-human linguistic artifacts; (2) introduction of the “reasoning text as artifact” conceptual framework, foregrounding cross-agent language understanding; and (3) a theoretical and methodological advance for evaluating explainable AI.

📝 Abstract
A new generation of AI models generates step-by-step reasoning text before producing an answer. This text appears to offer a human-readable window into their computation process, and is increasingly relied upon for transparency and interpretability. However, it is unclear whether human understanding of this text matches the model's actual computational process. In this paper, we investigate a necessary condition for correspondence: the ability of humans to identify which steps in a reasoning text causally influence later steps. We evaluated humans on this ability by composing questions based on counterfactual measurements and found a significant discrepancy: participant accuracy was only 29.3%, barely above chance (25%), and remained low (42%) even when evaluating the majority vote on questions with high agreement. Our results reveal a fundamental gap between how humans interpret reasoning texts and how models use them, challenging their utility as a simple interpretability tool. We argue that reasoning texts should be treated as artifacts to be investigated, not taken at face value, and that understanding the non-human ways these models use language is a critical research direction.
Problem

Research questions and friction points this paper is trying to address.

Humans struggle to identify causal reasoning steps in AI texts
Human interpretation of AI reasoning significantly differs from model computation
AI reasoning texts require investigation rather than face-value acceptance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-AI reasoning text causal analysis
Counterfactual question evaluation method
Revealing the gap between human interpretation and model computation
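The counterfactual method above can be sketched in miniature. This is an illustrative toy, not the authors' code: a reasoning chain is modeled as a sequence of steps, each a function of earlier step outputs, and a step is deemed causally influential if ablating it (replacing its output with a placeholder) changes the final answer. The step names and chain structure here are invented for illustration.

```python
def run_chain(steps, ablate=None):
    """Execute reasoning steps in order; ablating a step replaces its
    output with None to simulate removing it from the chain."""
    outputs = {}
    for name, fn in steps:
        outputs[name] = None if name == ablate else fn(outputs)
    return outputs["answer"]

# Toy chain: step2 merely restates the question and feeds nothing downstream,
# so it has no causal influence on the answer.
steps = [
    ("step1", lambda o: 6 * 7),
    ("step2", lambda o: "restating the question"),
    ("answer", lambda o: o["step1"]),
]

baseline = run_chain(steps)
causal = {
    name: run_chain(steps, ablate=name) != baseline
    for name, _ in steps[:-1]
}
print(causal)  # step1 is causally influential; step2 is not
```

A human judging the text alone might rate step2 as important because it reads as meaningful, which is exactly the interpretation-computation mismatch the paper measures.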