🤖 AI Summary
Large language models (LLMs) typically rely on fixed reasoning strategies, limiting their generalization across diverse logical reasoning tasks. Method: This paper systematically investigates whether prompt engineering can dynamically guide LLMs to switch adaptively among reasoning strategies (including chain-of-thought, tree-of-thought, and self-consistency ensembling) by proposing a multi-strategy prompting framework and a standardized logical reasoning benchmark. Contribution/Results: Experiments reveal that single-strategy prompting yields unstable performance, whereas adaptive strategy selection, triggered by task characteristics or model confidence, significantly improves overall accuracy (average +4.2%) and enhances error recovery. This work provides the first empirical validation of prompt-driven controllability over reasoning strategies, establishing an interpretable, low-overhead paradigm for improving LLM reasoning flexibility and robustness.
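The adaptive selection idea described above can be sketched as a small routing layer in front of the model: a selector inspects task characteristics and a confidence score, then chooses which prompting strategy to apply. This is a minimal illustrative sketch, not the paper's actual framework; the task labels, confidence threshold, and prompt templates are all hypothetical assumptions.

```python
# Hypothetical sketch of prompt-driven strategy selection (not the paper's code).
# A router picks one of three reasoning strategies based on task type and a
# model-confidence score, mirroring the two triggers the summary mentions.

PROMPT_TEMPLATES = {
    # Single linear rationale.
    "chain_of_thought": "Let's think step by step.\n\n{question}",
    # Explore and compare several reasoning branches.
    "tree_of_thought": (
        "Consider several distinct reasoning paths, evaluate each, "
        "and answer with the most promising one.\n\n{question}"
    ),
    # Sample multiple answers and majority-vote downstream.
    "self_consistency": (
        "Answer the question. (Multiple samples will be drawn "
        "and aggregated by majority vote.)\n\n{question}"
    ),
}

def select_strategy(task_type: str, confidence: float) -> str:
    """Choose a reasoning strategy from task characteristics and confidence.

    Both triggers (task type, confidence threshold) are illustrative
    assumptions, not values from the paper.
    """
    if confidence < 0.5:
        # Low confidence: ensembling helps recover from individual errors.
        return "self_consistency"
    if task_type == "search":
        # Branching, search-like problems benefit from exploring paths.
        return "tree_of_thought"
    # Default for straightforward deductive tasks.
    return "chain_of_thought"

def build_prompt(question: str, task_type: str, confidence: float):
    """Return the chosen strategy name and the fully formatted prompt."""
    strategy = select_strategy(task_type, confidence)
    return strategy, PROMPT_TEMPLATES[strategy].format(question=question)
```

For example, `build_prompt("Is this syllogism valid?", "deduction", 0.9)` routes to chain-of-thought, while dropping the confidence below the threshold switches the same question to self-consistency ensembling.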
📝 Abstract
Human reasoning involves different strategies, each suited to specific problems. Prior work shows that large language models (LLMs) tend to favor a single reasoning strategy, potentially limiting their effectiveness on diverse reasoning challenges. In this work, we investigate whether prompting can control LLMs' reasoning strategies and assess its impact on logical problem-solving. While our experiments show that no single strategy consistently improves accuracy, performance could be enhanced if models adaptively chose the optimal strategy. We propose methods to guide LLMs in strategy selection, highlighting new ways to refine their reasoning abilities.