Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse

📅 2024-10-27
🏛️ arXiv.org
📈 Citations: 30
Influential: 1
🤖 AI Summary
This study systematically investigates scenarios where chain-of-thought (CoT) prompting consistently degrades the performance of large language and multimodal models, motivated by cognitive psychology findings that verbalization impairs human performance in implicit statistical learning, visual recognition, and pattern classification with exceptions. Method: The authors conduct controlled zero-shot versus CoT comparisons across multiple models (e.g., GPT-4o, o1-preview), using cognitively grounded task designs that hold all factors constant except the prompting strategy. Contribution/Results: They identify robust negative effects of CoT across all three task categories, with absolute accuracy drops of up to 36.3%. The work proposes a new paradigm, predicting model reasoning failures from known human cognitive failures, challenging the assumption that CoT is universally beneficial. It delineates task boundaries where CoT is detrimental, and also identifies counterintuitive cases where verbalization hurts human performance while CoT maintains or improves model performance.

📝 Abstract
Chain-of-thought (CoT) prompting has become a widely used strategy for working with large language and multimodal models. While CoT has been shown to improve performance across many tasks, determining the settings in which it is effective remains an ongoing effort. In particular, it is still an open question in what settings CoT systematically reduces model performance. In this paper, we seek to identify the characteristics of tasks where CoT reduces performance by drawing inspiration from cognitive psychology, looking at cases where (i) verbal thinking or deliberation hurts performance in humans, and (ii) the constraints governing human performance generalize to language models. Three such cases are implicit statistical learning, visual recognition, and classifying with patterns containing exceptions. In extensive experiments across all three settings, we find that a diverse collection of state-of-the-art models exhibit significant drop-offs in performance (e.g., up to 36.3% absolute accuracy for OpenAI o1-preview compared to GPT-4o) when using inference-time reasoning compared to zero-shot counterparts. We also identify three tasks that satisfy condition (i) but not (ii), and find that while verbal thinking reduces human performance in these tasks, CoT retains or increases model performance. Overall, our results show that while there is not an exact parallel between the cognitive processes of models and those of humans, considering cases where thinking has negative consequences for human performance can help us identify settings where it negatively impacts models. By connecting the literature on human deliberation with evaluations of CoT, we offer a new tool that can be used in understanding the impact of prompt choices and inference-time reasoning.
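The controlled comparison described above can be sketched in a few lines: the two conditions differ only in whether a CoT cue is appended to the prompt, and accuracy is measured per condition. This is a minimal illustrative harness, not the paper's actual evaluation code; `call_model` is a hypothetical stand-in for any LLM API client, and the CoT cue wording is an assumption.

```python
# Minimal sketch (assumed, not the paper's harness) of a controlled
# zero-shot vs. chain-of-thought (CoT) comparison.

# Assumed CoT cue; the paper may use task-specific phrasings.
COT_SUFFIX = "Let's think step by step before answering."

def build_prompts(question: str) -> dict:
    """Return matched prompts that differ only in the presence of the CoT cue."""
    return {
        "zero_shot": question,
        "cot": f"{question}\n{COT_SUFFIX}",
    }

def accuracy(predictions, labels) -> float:
    """Fraction of exact-match predictions."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def compare_conditions(questions, labels, call_model) -> dict:
    """Run both prompting conditions through an injected model callable
    (hypothetical API) so the only varied factor is the prompt."""
    results = {}
    for condition in ("zero_shot", "cot"):
        preds = [call_model(build_prompts(q)[condition]) for q in questions]
        results[condition] = accuracy(preds, labels)
    return results
```

A drop in `results["cot"]` relative to `results["zero_shot"]` on, say, an implicit statistical learning task would mirror the effect the paper reports.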
Problem

Research questions and friction points this paper is trying to address.

Identify tasks where Chain-of-Thought reduces model performance
Compare CoT effects on models and human cognitive processes
Analyze negative impacts of deliberation in AI and humans
Innovation

Methods, ideas, or system contributions that make the work stand out.

Demonstrates consistent CoT performance drop-offs on cognitively grounded tasks
Links human cognitive psychology to model behavior
Evaluates CoT effects across six psychological tasks