Justified or Just Convincing? Error Verifiability as a Dimension of LLM Quality

📅 2026-04-06
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models often produce responses in high-stakes scenarios whose correctness is difficult for users to verify based on the provided reasoning. The paper introduces “error verifiability” as a novel quality dimension distinct from accuracy and proposes a balanced metric, \( v_{\text{bal}} \), to quantify how effectively a model’s rationale aids human judgment. To enhance verifiability, the authors develop two external-information-augmented rephrasing methods: Reflect-and-Rephrase (RR) for mathematical reasoning and Oracle-Rephrase (OR) for factual question answering. Experimental results demonstrate that conventional training and scaling approaches fail to improve verifiability, whereas RR and OR substantially increase human accuracy in judging answer correctness.
📝 Abstract
As LLMs are deployed in high-stakes settings, users must judge the correctness of individual responses, often relying on model-generated justifications such as reasoning chains or explanations. Yet no standard measure exists for whether these justifications help users distinguish correct answers from incorrect ones. We formalize this idea as error verifiability and propose $v_{\text{bal}}$, a balanced metric that measures whether justifications enable raters to accurately assess answer correctness, validated against human raters who show high agreement. We find that neither common approaches, such as post-training and model scaling, nor more targeted interventions improve verifiability. We introduce two methods that succeed at improving verifiability: reflect-and-rephrase (RR) for mathematical reasoning and oracle-rephrase (OR) for factual QA, both of which improve verifiability by incorporating domain-appropriate external information. Together, our results establish error verifiability as a distinct dimension of response quality that does not emerge from accuracy improvements alone and requires dedicated, domain-aware methods to address.
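The abstract describes $v_{\text{bal}}$ as a balanced metric of whether justifications let raters accurately assess answer correctness. The paper does not spell out the formula here, but a natural reading is balanced accuracy over rater judgments: the mean of rater accuracy on correct answers and on incorrect answers, so a metric that cannot be gamed by always saying "correct". A minimal sketch under that assumption (function name and inputs are hypothetical, not the authors' implementation):

```python
def v_bal(rater_says_correct, answer_is_correct):
    """Hypothetical balanced verifiability score: mean of rater accuracy
    on truly correct answers and on truly incorrect answers.

    rater_says_correct: list[bool], the rater's judgment per response
    answer_is_correct:  list[bool], the ground-truth label per response
    """
    pairs = list(zip(rater_says_correct, answer_is_correct))
    # Rater accuracy on correct answers (true-positive rate)
    pos = [r for r, a in pairs if a]
    # Rater accuracy on incorrect answers (true-negative rate)
    neg = [not r for r, a in pairs if not a]
    return 0.5 * (sum(pos) / len(pos) + sum(neg) / len(neg))

# A rater who always judges "correct" scores only 0.5,
# while a rater who perfectly separates the two classes scores 1.0.
```

Under this reading, chance-level rater performance yields 0.5 regardless of the base rate of correct answers, which is the usual motivation for balancing.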
Problem

Research questions and friction points this paper is trying to address.

- error verifiability
- large language models
- justifications
- response correctness
- human evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

- error verifiability
- justification quality
- reflect-and-rephrase
- oracle-rephrase
- human-aligned evaluation