Beyond the Last Answer: Your Reasoning Trace Uncovers More than You Think

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Evaluating large language models' (LLMs) mathematical reasoning solely via final-answer accuracy risks overlooking spurious correctness—correct outputs arising from flawed or redundant intermediate reasoning steps. Method: We propose a training-free, zero-parameter sub-thought–driven evaluation framework: (1) automatically segmenting chain-of-thought (CoT) rationales into semantically coherent sub-thoughts; (2) sampling multiple independent continuations from each sub-thought's endpoint; and (3) aggregating the resulting answers via majority voting and a consistency-based confidence score to yield robust predictions. Contribution/Results: This is the first method to systematically uncover and exploit latent redundancy in sub-thought–level correctness, moving beyond conventional single-path CoT evaluation. On AIME2024 and AIME2025, it achieves accuracy gains of up to 13% and 10% over strong baselines, respectively. Crucially, its consistency metric reliably distinguishes erroneous reasoning from superficially correct outputs, enhancing diagnostic interpretability.

📝 Abstract
Large Language Models (LLMs) leverage step-by-step reasoning to solve complex problems. Standard evaluation practice involves generating a complete reasoning trace and assessing the correctness of the final answer presented at its conclusion. In this paper, we challenge the reliance on the final answer by posing the following two questions: Does the final answer reliably represent the model's optimal conclusion? Can alternative reasoning paths yield different results? To answer these questions, we analyze intermediate reasoning steps, termed subthoughts, and propose a method based on our findings. Our approach involves segmenting a reasoning trace into sequential subthoughts based on linguistic cues. We start by prompting the model to generate continuations from the end-point of each intermediate subthought. We extract a potential answer from every completed continuation originating from different subthoughts. We find that aggregating these answers by selecting the most frequent one (the mode) often yields significantly higher accuracy compared to relying solely on the answer derived from the original complete trace. Analyzing the consistency among the answers derived from different subthoughts reveals characteristics that correlate with the model's confidence and correctness, suggesting potential for identifying less reliable answers. Our experiments across various LLMs and challenging mathematical reasoning datasets (AIME2024 and AIME2025) show consistent accuracy improvements, with gains reaching up to 13% and 10% respectively. Implementation is available at: https://github.com/hammoudhasan/SubthoughtReasoner.
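The segmentation and continuation steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cue-phrase list and the cumulative-prefix construction are assumptions, and the actual continuation generation (an LLM call) is omitted.

```python
import re

# Hypothetical linguistic cues that often open a new subthought in a
# chain-of-thought trace; the exact cue set used by the paper may differ.
CUE_PATTERN = re.compile(
    r"(?=\b(?:Wait|Alternatively|So|Therefore|Hmm|Okay|Let me)\b)"
)

def segment_subthoughts(trace: str) -> list[str]:
    """Split a trace at cue phrases and return the cumulative prefix
    ending at each subthought boundary. Each prefix is a point from
    which the model would be prompted to generate a fresh continuation."""
    parts = [p.strip() for p in CUE_PATTERN.split(trace) if p.strip()]
    return [" ".join(parts[: i + 1]) for i in range(len(parts))]

trace = (
    "Let me compute 2+3, which is 5. "
    "Wait, the question asks for 2*3, so the product is 6. "
    "Therefore the answer is 6."
)
prefixes = segment_subthoughts(trace)
# Three subthought end-points: each prefix would seed one continuation.
```

In the full method, each prefix is fed back to the model as a prompt, and an answer is extracted from every completed continuation before aggregation.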
Problem

Research questions and friction points this paper is trying to address.

Does the final answer reliably represent the model's optimal conclusion?
Can alternative reasoning paths yield different results?
How can analyzing intermediate reasoning steps improve accuracy?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Segment reasoning traces into subthoughts using linguistic cues
Generate continuations from intermediate subthought endpoints
Aggregate the extracted answers by selecting the mode (most frequent answer)
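The aggregation step above amounts to a majority vote over the answers extracted from all subthought continuations, with the winning answer's vote share serving as a consistency signal. A minimal sketch, assuming frequency-as-confidence (the paper's exact consistency metric may be more involved):

```python
from collections import Counter

def aggregate_answers(answers: list[str]) -> tuple[str, float]:
    """Return the modal answer across subthought continuations, plus its
    vote share as a simple consistency/confidence score."""
    counts = Counter(answers)
    answer, freq = counts.most_common(1)[0]
    return answer, freq / len(answers)

# Answers extracted from continuations of five hypothetical subthoughts:
answers = ["6", "6", "5", "6", "6"]
final, confidence = aggregate_answers(answers)
# final == "6", confidence == 0.8
```

A low vote share (answers scattered across many values) flags traces whose final answer may be unreliable, which is the diagnostic use the abstract describes.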