Finding the Cracks: Improving LLM Reasoning with Paraphrastic Probing and Consistency Verification

📅 2026-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the susceptibility of large language models to hallucination and error propagation in complex reasoning, particularly their difficulty in identifying and correcting critical tokens. To mitigate this, the authors propose the PPCV framework, which first identifies critical tokens by comparing reasoning paths from the original and rewritten versions of a question, leveraging mismatches between predicted and expected tokens. These critical tokens are then replaced to generate multiple alternative reasoning paths, and the final answer is determined through cross-path consistency verification. This approach uniquely integrates question rewriting with consistency-based validation for the detection and correction of critical tokens, significantly enhancing reasoning robustness and accuracy across multiple benchmarks and outperforming existing baseline methods.
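The first stage described above, probing for critical tokens via a paraphrased question, can be sketched in miniature. This is an illustrative toy, not the paper's implementation: `toy_next_token` stands in for an LLM's greedy (top-1) decoder, and the token table, function names, and the simple mismatch rule are all assumptions for demonstration.

```python
def toy_next_token(context):
    """Stand-in for greedy (top-1) LLM decoding: returns a predicted
    next token for a given token context. A real system would query
    the model; this lookup table is purely illustrative."""
    table = {
        ("Q:", "3+2?", "A:"): "5",
        ("Q:", "3+2?", "A:", "5"): "<eos>",
    }
    return table.get(tuple(context), "?")

def find_critical_tokens(paraphrase_ctx, reasoning_path, predict=toy_next_token):
    """Re-score each token of `reasoning_path` conditioned on the
    paraphrased question; positions where the top-1 prediction
    disagrees with the original token are flagged as critical."""
    critical = []
    for i, expected in enumerate(reasoning_path):
        context = list(paraphrase_ctx) + list(reasoning_path[:i])
        predicted = predict(context)
        if predicted != expected:
            critical.append((i, expected, predicted))
    return critical
```

Under this toy model, a correct path such as `["5"]` for the paraphrase "3+2?" yields no mismatches, while an erroneous path `["6"]` is flagged at position 0, mirroring how the framework localizes tokens that diverge between the original and rewritten questions.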

📝 Abstract
Large language models have demonstrated impressive performance across a variety of reasoning tasks. However, their problem-solving ability often declines on more complex tasks due to hallucinations and the accumulation of errors across intermediate steps. Recent work has introduced the notion of critical tokens: tokens in the reasoning process that exert significant influence on subsequent steps. Prior studies suggest that replacing critical tokens can refine reasoning trajectories. Nonetheless, reliably identifying and exploiting critical tokens remains challenging. To address this, we propose the Paraphrastic Probing and Consistency Verification (PPCV) framework. PPCV operates in two stages. In the first stage, we roll out an initial reasoning path from the original question and then concatenate paraphrased versions of the question with this reasoning path. We identify critical tokens at positions where the predicted top-1 token mismatches the expected token in the reasoning path, and apply a confirmation criterion to select the final critical token. In the second stage, we substitute critical tokens with candidate alternatives and roll out new reasoning paths for both the original and paraphrased questions. The final answer is determined by checking the consistency of outputs across these parallel reasoning processes. We evaluate PPCV on mainstream LLMs across multiple benchmarks. Extensive experiments demonstrate that PPCV substantially enhances the reasoning performance of LLMs compared to baselines.
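The second stage, substituting candidate tokens, rolling out parallel paths, and voting across them, can likewise be sketched as a toy. Everything here is an assumption for illustration: `toy_rollout` stands in for continued LLM generation, and majority voting is one simple reading of "cross-path consistency verification", not necessarily the paper's exact criterion.

```python
from collections import Counter

def toy_rollout(question, patched_path):
    """Stand-in for continuing generation from a patched reasoning
    path. In this toy, the paraphrased question "3+2?" always recovers
    the correct answer, while the original question propagates
    whatever token was substituted."""
    if "3+2?" in question:
        return "5"
    return patched_path[-1]

def ppcv_stage2(questions, path, critical_pos, candidates, rollout=toy_rollout):
    """For each candidate replacement of the critical token, re-roll
    reasoning paths for the original and paraphrased questions, then
    return the answer the most parallel paths agree on."""
    answers = []
    for cand in candidates:
        patched = path[:critical_pos] + [cand]
        for q in questions:
            answers.append(rollout(q, patched))
    return Counter(answers).most_common(1)[0][0]
```

With an erroneous path `["6"]`, candidates `["5", "6"]`, and the original plus one paraphrase, three of the four rollouts converge on "5", so the vote corrects the error, which is the intuition behind determining the final answer by cross-path consistency.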
Problem

Research questions and friction points this paper is trying to address.

large language models
reasoning
hallucinations
critical tokens
error accumulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

critical tokens
paraphrastic probing
consistency verification
reasoning refinement
hallucination mitigation