🤖 AI Summary
Existing large language models (LLMs) rely on a single reasoning paradigm, limiting their ability to handle complex scientific and medical question-answering tasks that require synergistic multi-strategy reasoning, especially under low-resource conditions. To address this, we propose Composite Reasoning (CR), a novel framework that, for the first time, enables adaptive, dynamic integration of deductive, inductive, abductive, and causal reasoning *within* LLMs. CR employs a learnable prompting mechanism to orchestrate multi-paradigm reasoning, optimize reasoning-path selection, and allocate computational resources efficiently. Evaluated on multiple scientific and medical QA benchmarks, CR significantly outperforms chain-of-thought (CoT) and DeepSeek-R1-style reasoning (SR) in accuracy, sample efficiency, and token efficiency. These results empirically validate that synergistic multi-paradigm reasoning enhances LLMs' cognitive flexibility and reasoning depth.
📝 Abstract
Large Language Models (LLMs), despite their remarkable capabilities, rely on a single predominant reasoning paradigm, hindering their performance on intricate problems that demand diverse cognitive strategies. To address this, we introduce Composite Reasoning (CR), a novel reasoning approach empowering LLMs to dynamically explore and combine multiple reasoning styles, such as deductive, inductive, and abductive reasoning, for more nuanced problem-solving. Evaluated on scientific and medical question-answering benchmarks, our approach outperforms existing baselines such as Chain-of-Thought (CoT) and surpasses DeepSeek-R1-style reasoning (SR) in accuracy, while demonstrating superior sample efficiency and adequate token usage. Notably, CR adaptively emphasizes domain-appropriate reasoning styles: it prioritizes abductive and deductive reasoning for medical question answering, but shifts to causal, deductive, and inductive methods for scientific reasoning. Our findings highlight that by cultivating internal reasoning style diversity, LLMs acquire more robust, adaptive, and efficient problem-solving abilities.
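To make the idea of combining reasoning styles concrete, here is a minimal, hypothetical sketch of how a composite-reasoning prompt might be assembled. This is not the authors' learnable prompting mechanism; the `REASONING_STYLES` table, `build_composite_prompt` function, and style descriptions are illustrative assumptions only.

```python
# Hypothetical sketch of composite-reasoning prompting (NOT the paper's
# actual implementation): assemble a prompt that invites the model to
# interleave several reasoning styles before committing to an answer.

REASONING_STYLES = {
    "deductive": "apply general rules to reach a necessary conclusion",
    "inductive": "generalize from specific observations",
    "abductive": "infer the most plausible explanation for the evidence",
    "causal": "trace cause-and-effect relationships",
}

def build_composite_prompt(question: str, styles=None) -> str:
    """Build a prompt asking the model to combine the given reasoning styles."""
    styles = styles or list(REASONING_STYLES)
    style_lines = "\n".join(f"- {s}: {REASONING_STYLES[s]}" for s in styles)
    return (
        "Solve the question below. Freely combine these reasoning styles,\n"
        "switching between them whenever another becomes more informative:\n"
        f"{style_lines}\n\nQuestion: {question}\nAnswer:"
    )

# Per the abstract, medical QA would emphasize abductive and deductive styles.
prompt = build_composite_prompt(
    "A patient presents with fever and a petechial rash. "
    "What is the most likely diagnosis?",
    styles=["abductive", "deductive"],
)
print(prompt)
```

A scientific-QA caller would instead pass `styles=["causal", "deductive", "inductive"]`, mirroring the domain-adaptive emphasis the abstract reports.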