Performative Thinking? The Brittle Correlation Between CoT Length and Problem Complexity

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the prevailing assumption that Chain-of-Thought (CoT) length reflects underlying problem complexity. Method: The authors design a maze-solving task grounded in A* search paths, training Transformer models from scratch to generate CoT sequences; they then rigorously evaluate the correlation between generated CoT length and true computational difficulty—defined as the optimal A* path length—under both in-distribution and out-of-distribution settings. Contribution/Results: Empirical analysis reveals only weak correlation between CoT length and optimal path length, confined to in-distribution samples. Models frequently produce unnecessarily long chains for trivial problems, indicating reliance on statistical pattern memorization rather than adaptive, depth-sensitive reasoning. This study is the first to empirically refute the “longer chain implies deeper reasoning” heuristic via controlled trajectory generation and strict in-/out-of-distribution evaluation. It provides foundational insights for rethinking CoT interpretability, validity, and the modeling of structured reasoning in language models.

📝 Abstract
Intermediate token generation (ITG), where a model produces output before the solution, has been proposed as a method to improve the performance of language models on reasoning tasks. While these reasoning traces or Chain of Thoughts (CoTs) are correlated with performance gains, the mechanisms underlying them remain unclear. A prevailing assumption in the community has been to anthropomorphize these tokens as "thinking", treating longer traces as evidence of higher problem-adaptive computation. In this work, we critically examine whether intermediate token sequence length reflects or correlates with problem difficulty. To do so, we train transformer models from scratch on derivational traces of the A* search algorithm, where the number of operations required to solve a maze problem provides a precise and verifiable measure of problem complexity. We first evaluate the models on trivial free-space problems, finding that even for the simplest tasks, they often produce excessively long reasoning traces and sometimes fail to generate a solution. We then systematically evaluate the model on out-of-distribution problems and find that the intermediate token length and ground truth A* trace length only loosely correlate. We notice that the few cases where correlation appears are those where the problems are closer to the training distribution, suggesting that the effect arises from approximate recall rather than genuine problem-adaptive computation. This suggests that the inherent computational complexity of the problem instance is not a significant factor, but rather its distributional distance from the training data. These results challenge the assumption that intermediate trace generation is adaptive to problem difficulty and caution against interpreting longer sequences in systems like R1 as automatically indicative of "thinking effort".
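The abstract's key design choice is using A* on mazes so that each problem instance has a precise, verifiable difficulty label. As an illustrative sketch (not the authors' code), a minimal A* on a 4-connected grid maze, returning the optimal path length that serves as the ground-truth complexity measure, might look like:

```python
import heapq

def astar_path_length(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = wall). Returns the optimal
    path length in steps, or None if the goal is unreachable. Manhattan
    distance is admissible here, so the returned length is exact."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]  # (f = g + h, g, position)
    best_g = {start: 0}
    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        if g > best_g.get(pos, float("inf")):
            continue  # stale heap entry; a shorter route was already found
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# A trivial "free-space" maze like those in the paper's first evaluation:
# no walls, so the optimal length is just the Manhattan distance.
free = [[0] * 4 for _ in range(4)]
print(astar_path_length(free, (0, 0), (3, 3)))  # → 6
```

On such free-space instances the correct derivation is short and mechanical, which is what makes the models' excessively long traces on them notable.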
Problem

Research questions and friction points this paper is trying to address.

Examining correlation between CoT length and problem complexity
Challenging assumption that longer reasoning traces indicate thinking
Assessing if intermediate tokens reflect genuine adaptive computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated CoT length against A* search complexity
Found weak correlation with problem difficulty
Attributed effects to training distribution proximity
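The correlation analysis above reduces to comparing two length sequences per test set. A hypothetical sketch (synthetic numbers, not the paper's data) of computing the Pearson correlation between ground-truth A* trace lengths and the CoT token counts a model emits:

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic illustration: ground-truth A* trace lengths vs. the CoT token
# counts emitted for the same problems. A model that pads easy problems
# with long chains yields only a weak positive correlation.
astar_lengths = [4, 6, 8, 12, 20, 30]
cot_lengths = [180, 40, 175, 60, 190, 185]
print(round(pearson(astar_lengths, cot_lengths), 2))  # → 0.4
```

Under the paper's framing, a near-zero or weak coefficient like this on out-of-distribution instances is evidence that trace length tracks distributional familiarity rather than instance difficulty.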