TypedThinker: Typed Thinking Improves Large Language Model Reasoning

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current large language models (LLMs) exhibit limited reasoning diversity, over-relying on deductive reasoning and struggling with complex problems requiring inductive, abductive, or analogical strategies. To address this, we propose the first dynamic reasoning-type adaptation framework that explicitly models and selects reasoning types. Our method automatically classifies the required reasoning type based on problem characteristics, constructs type-aware prompts, retrieves and injects corresponding exemplars, and guides reasoning strategy via a lightweight adaptation moduleโ€”without model distillation. Evaluated on logical and mathematical reasoning benchmarks, our approach improves Mistral-7B, LLaMA3-8B, and Qwen2-7B by 3.4%, 6.5%, and 7.0%, respectively. Moreover, it is plug-and-play compatible, enhancing off-the-shelf systems including GPT-4o and MetaMath. This work advances reasoning flexibility in LLMs through explicit, adaptive reasoning-type modeling.
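The pipeline described above (classify the required reasoning type, retrieve matching exemplars, and build a type-aware prompt) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the toy effectiveness table, and the exemplar store are all hypothetical.

```python
"""Illustrative sketch of a TypedThinker-style pipeline (hypothetical names)."""

REASONING_TYPES = ("deductive", "inductive", "abductive", "analogical")

# Toy record of how well each reasoning type worked on past problems,
# keyed by (problem_category, reasoning_type) -> empirical success rate.
EFFECTIVENESS = {
    ("math", "deductive"): 0.62,
    ("math", "inductive"): 0.71,
    ("logic", "abductive"): 0.58,
    ("logic", "deductive"): 0.49,
}

# Toy exemplar store: worked demonstrations tagged by reasoning type.
EXEMPLARS = {
    "inductive": ["Q: 2, 4, 8, 16, ... A: Each term doubles; next is 32."],
    "deductive": ["Q: All A are B; x is A. A: Therefore x is B."],
}

def select_reasoning_type(category: str) -> str:
    """Pick the reasoning type with the highest recorded success rate."""
    scores = {t: EFFECTIVENESS.get((category, t), 0.0) for t in REASONING_TYPES}
    return max(scores, key=scores.get)

def build_prompt(problem: str, category: str, k: int = 1) -> str:
    """Construct a type-aware prompt with retrieved demonstrations injected."""
    rtype = select_reasoning_type(category)
    demos = EXEMPLARS.get(rtype, [])[:k]
    lines = [f"Use {rtype} reasoning to solve the problem."]
    lines += [f"Example: {d}" for d in demos]
    lines.append(f"Problem: {problem}")
    return "\n".join(lines)

print(build_prompt("What is the next number: 3, 9, 27, ...?", "math"))
```

The resulting prompt would then be passed to the base LLM; in the paper the type prediction is learned rather than looked up from a static table, but the control flow is the same.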

๐Ÿ“ Abstract
Large Language Models (LLMs) have demonstrated strong reasoning capabilities in solving complex problems. However, current approaches primarily enhance reasoning through the elaboration of thoughts while neglecting the diversity of reasoning types. LLMs typically employ deductive reasoning, proceeding step-by-step from given conditions, which limits their exploration during problem-solving. Our analysis reveals that certain problems are solvable only through specific reasoning strategies such as inductive, abductive, or analogical reasoning. Incorporating diverse reasoning approaches, however, presents two key challenges: identifying the appropriate reasoning type for each problem and exploiting this approach during problem-solving. We therefore propose TypedThinker, which predicts suitable reasoning types based on the problem and the prior effectiveness of each type, and provides relevant demonstrations to guide LLMs in applying these strategies. Experimental results show significant improvements across multiple benchmarks, with performance gains of 3.4% for Mistral 7B, 6.5% for LLaMA3 8B, and 7% for Qwen 2 7B on logical and mathematical reasoning tasks. TypedThinker enhances LLM reasoning without requiring knowledge distillation from larger models. It can be integrated into more advanced systems like GPT-4o or specialized models like MetaMath to diversify their reasoning approaches and improve their problem-solving capabilities.
Problem

Research questions and friction points this paper is trying to address.

Diversify reasoning types in LLMs beyond deductive approaches
Identify optimal reasoning strategies for specific problem types
Enhance LLM performance without knowledge distillation from larger models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts reasoning types based on problem analysis
Provides demonstrations to guide LLM reasoning
Enhances diverse reasoning without knowledge distillation