Beyond Semantics: The Unreasonable Effectiveness of Reasonless Intermediate Tokens

📅 2025-05-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work challenges the implicit assumption in Chain-of-Thought (CoT) prompting that intermediate reasoning tokens must be semantically correct to yield accurate final answers. Method: the study trains transformer models on formally verifiable reasoning traces generated by an A* solver, constraining both intermediate steps and final outputs, and uses a formal interpreter of the problem semantics to systematically assess whether the correctness of intermediate steps causally influences final-answer accuracy. Contribution/Results: empirical results show that semantically correct intermediate reasoning is not necessary for task success; surprisingly, training on semantically irrelevant (noisy) traces can match, and in some cases improve upon, models trained on correct traces in both in-distribution accuracy and out-of-distribution (OOD) generalization. These findings undercut the anthropomorphic interpretability paradigm underlying CoT, revealing no reliable positive correlation between the semantic fidelity of intermediate tokens and model performance. The work thus offers a new perspective on reasoning-model design, shifting focus from the semantic plausibility of intermediate steps toward their functional utility in end-to-end inference.
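The method summarized above relies on traces emitted by a formal solver. As a rough, self-contained illustration (not the paper's actual setup or trace format), a minimal A* search on a toy grid can log each node expansion as a verifiable trace step:

```python
import heapq

def astar_with_trace(grid, start, goal):
    """Run A* on a 4-connected grid, recording each expansion as a trace step.

    `grid` is a list of strings; '#' marks walls. The trace format here is
    hypothetical: one (node, g, h) tuple per expanded node, which a formal
    interpreter could later check against the intended algorithm.
    """
    def h(p):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    trace = []
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node in seen:
            continue
        seen.add(node)
        trace.append((node, g, f - g))  # record this expansion
        if node == goal:
            return trace, path
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return trace, None  # goal unreachable

grid = ["....",
        ".##.",
        "...."]
trace, path = astar_with_trace(grid, (0, 0), (2, 3))
```

Because every trace step is generated by the solver itself, correctness of each expansion can be verified mechanically, which is what allows the paper to measure trace validity independently of final-answer accuracy.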

๐Ÿ“ Abstract
Recent impressive results from large reasoning models have been interpreted as a triumph of Chain of Thought (CoT), and especially of the process of training on CoTs sampled from base LLMs in order to help find new reasoning patterns. In this paper, we critically examine that interpretation by investigating how the semantics of intermediate tokens, often anthropomorphized as "thoughts" or reasoning traces and claimed to display behaviors like backtracking, self-verification, etc., actually influence model performance. We train transformer models on formally verifiable reasoning traces and solutions, constraining both intermediate steps and final outputs to align with those of a formal solver (in our case, A* search). By constructing a formal interpreter of the semantics of our problems and intended algorithm, we systematically evaluate not only solution accuracy but also the correctness of intermediate traces, thus allowing us to evaluate whether the latter causally influences the former. We notice that, despite significant improvements on the solution-only baseline, models trained on entirely correct traces still produce invalid reasoning traces when arriving at correct solutions. To further show that trace accuracy is only loosely connected to solution accuracy, we then train models on noisy, corrupted traces which have no relation to the specific problem each is paired with, and find that not only does performance remain largely consistent with models trained on correct data, but in some cases can improve upon it and generalize more robustly on out-of-distribution tasks. These results challenge the assumption that intermediate tokens or "Chains of Thought" induce predictable reasoning behaviors and caution against anthropomorphizing such outputs or over-interpreting them (despite their mostly correct forms) as evidence of human-like or algorithmic behaviors in language models.
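The noisy-trace experiment described in the abstract pairs each problem with a trace from an unrelated problem while keeping the correct solution. A minimal sketch of such a corruption step, assuming illustrative field names (`problem`, `trace`, `solution`) and a derangement-style shuffle rather than the authors' exact procedure:

```python
import random

def corrupt_traces(dataset, seed=0):
    """Pair every example with a reasoning trace drawn from a *different*
    example, so the trace carries no problem-specific semantics, while the
    correct final solution is kept. Field names are illustrative."""
    rng = random.Random(seed)
    n = len(dataset)
    perm = list(range(n))
    rng.shuffle(perm)
    # Rotate any fixed points so no example keeps its own trace (needs n >= 2).
    for i in range(n):
        if perm[i] == i:
            j = (i + 1) % n
            perm[i], perm[j] = perm[j], perm[i]
    return [
        {"problem": ex["problem"],
         "trace": dataset[perm[i]]["trace"],   # unrelated trace
         "solution": ex["solution"]}           # correct answer kept
        for i, ex in enumerate(dataset)
    ]
```

Training on data built this way is what lets the paper decouple trace semantics from solution supervision: if accuracy survives the shuffle, the intermediate tokens' meaning cannot be what drives the final answer.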
Problem

Research questions and friction points this paper is trying to address.

Examining impact of intermediate token semantics on model performance
Assessing correctness of reasoning traces versus solution accuracy
Challenging assumptions about Chain of Thought reasoning behaviors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training models on formally verifiable reasoning traces
Using corrupted traces to test solution accuracy
Challenging Chain of Thought assumptions with noisy data