Reverse Thinking Makes LLMs Stronger Reasoners

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
How can large language models (LLMs) be taught to reason in reverse, a skill critical to commonsense, mathematical, and logical inference? This paper proposes RevThink, a learnable backward-reasoning framework for LLMs. It introduces a structured forward-backward data augmentation scheme and multi-task training with three objectives (forward reasoning generation, backward question construction, and backward reasoning generation), establishing a consistency check between forward and backward reasoning. Using a teacher-student setup with knowledge distillation, RevThink outperforms standard fine-tuning trained on 10x more forward reasoning while using only 10% of the correct forward reasoning from the training data. Across 12 reasoning benchmarks, it improves the student model's zero-shot accuracy by an average of 13.53% and beats the strongest knowledge distillation baselines by 6.84%. It also exhibits strong out-of-distribution generalization and sample efficiency.

📝 Abstract
Reverse thinking plays a crucial role in human reasoning. Humans can reason not only from a problem to a solution but also in reverse, i.e., start from the solution and reason towards the problem. This often enhances overall reasoning performance as it enables consistency checks between their forward and backward thinking. To enable Large Language Models (LLMs) to perform reverse thinking, we introduce Reverse-Enhanced Thinking (RevThink), a framework composed of data augmentation and learning objectives. In RevThink, we augment the dataset by collecting structured forward-backward reasoning from a teacher model, consisting of: (1) the original question, (2) forward reasoning, (3) backward question, and (4) backward reasoning. We then employ three objectives to train a smaller student model in a multi-task learning fashion: (a) generate forward reasoning from a question, (b) generate a backward question from a question, and (c) generate backward reasoning from the backward question. Experiments across 12 datasets covering commonsense, math, and logical reasoning show an average 13.53% improvement over the student model's zero-shot performance and a 6.84% improvement over the strongest knowledge distillation baselines. Moreover, our method demonstrates sample efficiency -- using only 10% of the correct forward reasoning from the training data, it outperforms a standard fine-tuning method trained on 10x more forward reasoning. RevThink also exhibits strong generalization to out-of-distribution held-out datasets.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLM reasoning with reverse thinking.
Improve reasoning performance via forward-backward consistency.
Achieve sample efficiency and strong generalization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Reverse-Enhanced Thinking (RevThink) framework.
Augments data with structured forward-backward reasoning pairs.
Trains a student model with multi-task learning objectives.
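The three training objectives can be sketched as building three (input, target) pairs from each teacher-augmented example. This is a minimal illustration, not the paper's implementation; the field names are hypothetical.

```python
def build_multitask_instances(example):
    """Build the three RevThink-style training pairs from one
    teacher-augmented example (field names are illustrative)."""
    q = example["question"]              # (1) original question
    fr = example["forward_reasoning"]    # (2) teacher's forward reasoning
    bq = example["backward_question"]    # (3) teacher's backward question
    br = example["backward_reasoning"]   # (4) teacher's backward reasoning
    return [
        # (a) generate forward reasoning from the question
        {"input": q, "target": fr},
        # (b) generate the backward question from the question
        {"input": q, "target": bq},
        # (c) generate backward reasoning from the backward question
        {"input": bq, "target": br},
    ]
```

Each pair would then be fed to the student model as a standard sequence-to-sequence fine-tuning instance, so one augmented example yields three supervision signals.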