Thinking in Many Modes: How Composite Reasoning Elevates Large Language Model Performance with Limited Data

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models (LLMs) rely on a single reasoning paradigm, limiting their ability to handle complex scientific and medical question-answering tasks that require synergistic multi-strategy reasoning, especially under low-resource conditions. To address this, we propose Composite Reasoning (CR), a novel framework that, for the first time, enables adaptive, dynamic integration of deductive, inductive, abductive, and causal reasoning *within* LLMs. CR employs a learnable prompting mechanism to orchestrate multi-paradigm reasoning, optimize reasoning-path selection, and allocate computational resources efficiently. Evaluated on multiple scientific and medical QA benchmarks, CR significantly outperforms chain-of-thought (CoT) prompting and DeepSeek-R1-style reasoning (SR) in accuracy, sample efficiency, and token efficiency. These results empirically validate that synergistic multi-paradigm reasoning enhances LLMs' cognitive flexibility and reasoning depth.

📝 Abstract
Large Language Models (LLMs), despite their remarkable capabilities, rely on singular, predominant reasoning paradigms, hindering their performance on intricate problems that demand diverse cognitive strategies. To address this, we introduce Composite Reasoning (CR), a novel reasoning approach empowering LLMs to dynamically explore and combine multiple reasoning styles, such as deductive, inductive, and abductive reasoning, for more nuanced problem-solving. Evaluated on scientific and medical question-answering benchmarks, our approach outperforms existing baselines like Chain-of-Thought (CoT) and also surpasses the accuracy of DeepSeek-R1-style reasoning (SR), while demonstrating superior sample efficiency and adequate token usage. Notably, CR adaptively emphasizes domain-appropriate reasoning styles: it prioritizes abductive and deductive reasoning for medical question answering, but shifts to causal, deductive, and inductive methods for scientific reasoning. Our findings highlight that by cultivating internal reasoning-style diversity, LLMs acquire more robust, adaptive, and efficient problem-solving abilities.
Problem

Research questions and friction points this paper is trying to address.

LLMs rely on singular reasoning paradigms, limiting complex problem-solving
Composite Reasoning dynamically combines multiple cognitive strategies for more nuanced solutions
Adaptive reasoning styles enhance LLM performance under limited-data conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Composite Reasoning combines multiple reasoning styles dynamically
Adaptively emphasizes domain-appropriate reasoning methods
Enhances sample efficiency and accuracy with limited data
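To make the idea concrete, here is a minimal illustrative sketch of how domain-adaptive composition of reasoning styles could be expressed as prompt assembly. This is not the paper's implementation (its learnable prompting mechanism is not described here); the style hints, domain-to-style mapping, and function name are all invented for illustration, loosely mirroring the abstract's finding that medical QA favors abductive and deductive reasoning while scientific QA favors causal, deductive, and inductive reasoning.

```python
# Hypothetical sketch only: the actual CR framework learns its reasoning-path
# selection; here the mapping is hard-coded for illustration.

STYLE_HINTS = {
    "deductive": "Apply general rules to the specific facts step by step.",
    "inductive": "Generalize a pattern from the given examples.",
    "abductive": "Propose the most plausible explanation for the observations.",
    "causal": "Trace cause-and-effect relationships between the variables.",
}

# Domain-to-style emphasis, echoing the abstract's reported preferences.
DOMAIN_STYLES = {
    "medical": ["abductive", "deductive"],
    "scientific": ["causal", "deductive", "inductive"],
}

def compose_prompt(question: str, domain: str) -> str:
    """Build one prompt that asks the model to reason in several styles,
    then synthesize the strands into a single answer."""
    styles = DOMAIN_STYLES.get(domain, list(STYLE_HINTS))
    parts = [
        f"Question: {question}",
        "Reason using each of the following styles, then combine them:",
    ]
    for i, style in enumerate(styles, 1):
        parts.append(f"{i}. {style.capitalize()} reasoning: {STYLE_HINTS[style]}")
    parts.append("Finally, synthesize the reasoning strands above into one answer.")
    return "\n".join(parts)

prompt = compose_prompt("Which diagnosis best explains these symptoms?", "medical")
print(prompt)
```

A learnable variant would replace the fixed `DOMAIN_STYLES` table with weights optimized from a small number of labeled examples, which is where the sample-efficiency claim would come in.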