Chain of Thoughtlessness? An Analysis of CoT in Planning

📅 2024-05-08
🏛️ Neural Information Processing Systems
📈 Citations: 25
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the generalization capability of chain-of-thought (CoT) prompting in large language models (LLMs) for reasoning, focusing on the canonical planning domain Blocksworld. Method: We conduct a systematic empirical analysis using two state-of-the-art LLMs on controlled-complexity Blocksworld tasks and scalable CoT benchmark variants. Contribution/Results: We find that CoT performance critically depends on strict structural alignment—e.g., stack height—between exemplars and queries, exhibiting negligible generalization across problem complexity or syntactic form. Its gains stem from problem-specific pattern matching rather than acquisition of general algorithms. This study provides the first evidence of a fundamental generalization bottleneck for CoT in classical planning and quantifies a significant trade-off between CoT efficacy and the human effort required to engineer high-quality reasoning traces. These findings challenge the prevailing hypothesis that CoT enables implicit algorithm learning.

📝 Abstract
Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution. Previous work has claimed that this can be mitigated with chain of thought prompting (a method of demonstrating solution procedures), with the intuition that it is possible to in-context teach an LLM an algorithm for solving the problem. This paper presents a case study of chain of thought on problems from Blocksworld, a classical planning domain, and examines the performance of two state-of-the-art LLMs across two axes: generality of examples given in the prompt, and complexity of problems queried with each prompt. Although our problems are very simple, we find meaningful performance improvements from chain of thought prompts only when those prompts are exceedingly specific to their problem class, and those improvements quickly deteriorate as the size n of the query-specified stack grows past the size of stacks shown in the examples. We also create scalable variants of three domains commonly studied in previous CoT papers and demonstrate the existence of similar failure modes. Our results hint that, contrary to previous claims in the literature, CoT's performance improvements do not stem from the model learning general algorithmic procedures via demonstrations but depend on carefully engineering highly problem-specific prompts. This spotlights drawbacks of chain of thought, especially the sharp tradeoff between possible performance gains and the amount of human labor necessary to generate examples with correct reasoning traces.
Problem

Research questions and friction points this paper is trying to address.

LLM performance on reasoning problems lacks generalization.
Chain of thought prompts require highly specific examples for improvement.
Performance gains from CoT depend on problem-specific prompt engineering.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes chain of thought prompting in LLMs
Tests performance on Blocksworld planning problems
Highlights need for problem-specific prompt engineering
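The experimental setup described above (CoT exemplars paired with Blocksworld queries whose stack size n is swept past the exemplar's) can be illustrated with a minimal sketch. This is a hypothetical harness, not the paper's actual code; the function names, the table-to-single-stack task shape, and the exemplar format are all assumptions for illustration.

```python
# Hypothetical sketch of the evaluation axis the paper describes: a hand-written
# CoT exemplar for a stack of height k, paired with queries of growing height n.
# All names and formats here are illustrative, not the authors' harness.

def blocksworld_query(n):
    """State: blocks b1..bn on the table; goal: a single stack b1 on b2 ... on bn."""
    blocks = [f"b{i}" for i in range(1, n + 1)]
    init = ", ".join(f"{b} is on the table" for b in blocks)
    goal = ", ".join(f"{a} is on {b}" for a, b in zip(blocks, blocks[1:]))
    return f"Initial state: {init}. Goal: {goal}. List the actions."

def cot_exemplar(k):
    """A worked reasoning trace for stack height k, used as the in-context demonstration."""
    steps = [f"pick up b{i} and stack it on b{i + 1}" for i in range(k - 1, 0, -1)]
    return f"Example: {blocksworld_query(k)}\nReasoning: " + "; then ".join(steps) + "."

def build_prompt(exemplar_height, query_height):
    # The paper's finding, paraphrased: accuracy holds while query_height stays
    # near exemplar_height and collapses as it grows, so the sweep raises it.
    return cot_exemplar(exemplar_height) + "\n\n" + blocksworld_query(query_height)

# One point on the sweep: a height-3 exemplar queried against a height-6 problem.
prompt = build_prompt(exemplar_height=3, query_height=6)
```

Sweeping `query_height` upward while holding `exemplar_height` fixed reproduces the paper's complexity axis; varying how closely the exemplar's structure matches the query reproduces the generality axis.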