Enhancing Chain of Thought Prompting in Large Language Models via Reasoning Patterns

📅 2024-04-23
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing unsupervised chain-of-thought (CoT) prompting methods rely on semantic similarity to select in-context examples, which often introduces noise, suffers from poor interpretability, and thereby limits multi-step reasoning performance. To address this, the paper proposes a demonstration-selection framework based on "reasoning patterns": structured representations of the implicit process by which a model reaches its answer. The approach combines large language model priors, prompt engineering, and pattern clustering to construct task-specific, diverse, and interpretable pattern sets that guide reasoning along coherent paths. By decoupling example selection from surface-level question semantics and grounding it in latent reasoning structure, the method reduces selection noise. Experiments on mathematical reasoning, commonsense reasoning, and other multi-step benchmarks show consistent accuracy gains along with improved robustness, transparency, and controllability of CoT generation.

📝 Abstract
Chain of Thought (CoT) prompting can encourage language models to engage in multi-step logical reasoning. The quality of the provided demonstrations significantly influences the success of downstream inference tasks. Current unsupervised CoT methods primarily select examples based on the semantics of the questions, which can introduce noise and lack interpretability. In this paper, we propose leveraging reasoning patterns to enhance CoT prompting effectiveness. Reasoning patterns represent the process by which language models arrive at their final results. By utilizing prior knowledge and prompt-based methods from large models, we first construct task-specific pattern sets. We then select diverse demonstrations based on different reasoning patterns. This approach not only mitigates the impact of noise but also provides explicit interpretability to help us understand the mechanisms of CoT. Extensive experiments demonstrate that our method is more robust and consistently leads to improvements across various reasoning tasks.
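The abstract describes a two-step pipeline: cluster candidate demonstrations by their reasoning patterns, then pick diverse examples across clusters. A toy sketch of that idea is below; the feature vectors, the plain k-means with farthest-point seeding, and the nearest-to-centroid pick are illustrative assumptions, not the paper's actual pipeline (in practice the pattern vectors would come from encoding model-generated rationales).

```python
def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_point_init(vectors, k):
    """Deterministic seeding: greedily pick points far from those already chosen."""
    centroids = [vectors[0]]
    while len(centroids) < k:
        nxt = max(vectors, key=lambda v: min(dist2(v, c) for c in centroids))
        centroids.append(nxt)
    return centroids

def kmeans(vectors, k, iters=20):
    """Plain k-means over lists of floats (stand-in for pattern clustering)."""
    centroids = farthest_point_init(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k), key=lambda i: dist2(v, centroids[i]))
            clusters[j].append(v)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

def select_demonstrations(pattern_vecs, k):
    """Pick one in-context demonstration per reasoning-pattern cluster:
    the candidate whose (hypothetical) pattern vector is closest to each
    cluster centroid, so the selected set covers diverse patterns."""
    centroids = kmeans(pattern_vecs, k)
    chosen = []
    for c in centroids:
        idx = min(range(len(pattern_vecs)),
                  key=lambda i: dist2(pattern_vecs[i], c))
        if idx not in chosen:
            chosen.append(idx)
    return chosen
```

Selecting one demonstration per cluster is what distinguishes this from similarity-based retrieval: instead of the k examples nearest to the test question, the prompt covers k distinct reasoning patterns.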
Problem

Research questions and friction points this paper is trying to address.

Enhance CoT prompting via reasoning patterns.
Improve interpretability and reduce noise in CoT.
Select diverse demonstrations using task-specific reasoning patterns.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages reasoning patterns for CoT prompting.
Constructs task-specific reasoning pattern sets.
Selects diverse demonstrations based on patterns.
Yufeng Zhang
University of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences
Xuepeng Wang
Institute of Automation, Chinese Academy of Sciences
Lingxiang Wu
Institute of Automation, Chinese Academy of Sciences
Jinqiao Wang
Institute of Automation, Chinese Academy of Sciences