Fundamental Reasoning Paradigms Induce Out-of-Domain Generalization in Language Models

📅 2026-02-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how the three fundamental reasoning paradigms—deductive, inductive, and abductive inference—enhance the out-of-domain generalization capabilities of large language models. To isolate reasoning from world knowledge, the authors construct a symbolic task trajectory dataset and systematically evaluate the individual and synergistic contributions of these reasoning mechanisms to cross-domain generalization. They further propose a reasoning-enhancement strategy that integrates fine-tuning, increased model depth, and a transition from dense architectures to Mixture-of-Experts (MoE). Experiments on fully natural-language, real-world out-of-domain tasks demonstrate significant performance improvements, with gains of up to 14.60 points, thereby providing the first systematic evidence of the critical role foundational reasoning paradigms play in large-model generalization.
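The three paradigms named above follow a classic schema: deduction derives a result from a rule and a case, induction generalizes a rule from observed (case, result) pairs, and abduction hypothesizes a case from a rule and an observed result. A toy Python sketch (the `bird`/`can_fly` rule and all function names are illustrative, not the paper's symbolic dataset format) makes the distinction concrete:

```python
# Toy rule base: "all birds can fly".
rule = {"bird": "can_fly"}

def deduce(rule, case):
    # Deduction: rule + case -> result ("X is a bird" => "X can fly").
    return rule.get(case)

def induce(observations):
    # Induction: repeated (case, result) pairs -> generalized rule.
    cases = {c for c, _ in observations}
    results = {r for _, r in observations}
    if len(cases) == 1 and len(results) == 1:
        return {cases.pop(): results.pop()}
    return None  # no single consistent rule

def abduce(rule, result):
    # Abduction: rule + observed result -> plausible explaining case(s).
    return [c for c, r in rule.items() if r == result]

assert deduce(rule, "bird") == "can_fly"
assert induce([("bird", "can_fly"), ("bird", "can_fly")]) == {"bird": "can_fly"}
assert abduce(rule, "can_fly") == ["bird"]
```

Note that only deduction is truth-preserving; induction and abduction are defeasible, which is part of what makes their contribution to generalization worth isolating.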

📝 Abstract
Deduction, induction, and abduction are fundamental reasoning paradigms at the core of human logical thinking. Although improving Large Language Model (LLM) reasoning has attracted significant research effort, the extent to which these fundamental paradigms induce generalization has yet to be systematically explored. In this study, we shed light on how the interplay between these core paradigms influences LLMs' reasoning behavior. To this end, we first collect a new dataset of reasoning trajectories from symbolic tasks, each targeting one of the three fundamental paradigms, to abstract away from concrete world knowledge. We then investigate effective ways of inducing these skills into LLMs, experimenting with a battery of methods including simple fine-tuning as well as more complex approaches that increase model depth or transform a dense model into a mixture-of-experts. We comprehensively evaluate the induced models on realistic out-of-domain tasks that are entirely formulated in natural language and involve real-world knowledge. Our results reveal that our approach yields strong generalizability with substantial performance gains (up to $14.60$) across realistic tasks.
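One of the induction strategies the abstract mentions is transforming a dense model into a mixture-of-experts. A common way to do this (often called "upcycling"; the paper's exact recipe may differ) is to replace each dense feed-forward block with several experts initialized as copies of it, plus a learned router, so the MoE starts out functionally identical to the dense model before further fine-tuning. A minimal NumPy sketch, with all dimensions and weight names chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts = 8, 16, 4

# Dense FFN weights (stand-in for a trained transformer layer).
W1 = rng.standard_normal((d_model, d_ff)) * 0.1
W2 = rng.standard_normal((d_ff, d_model)) * 0.1

def dense_ffn(x):
    return np.maximum(x @ W1, 0) @ W2  # ReLU FFN

# "Upcycle": every expert starts as a copy of the dense FFN weights.
experts = [(W1.copy(), W2.copy()) for _ in range(n_experts)]
W_gate = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_ffn(x):
    # Top-1 routing: each token is sent to its highest-scoring expert.
    logits = x @ W_gate
    out = np.empty_like(x)
    for i, token in enumerate(x):
        e1, e2 = experts[int(np.argmax(logits[i]))]
        out[i] = np.maximum(token @ e1, 0) @ e2
    return out

x = rng.standard_normal((5, d_model))
# Because all experts are copies, the MoE initially matches the dense model
# exactly, regardless of how the (random) router assigns tokens.
assert np.allclose(moe_ffn(x), dense_ffn(x))
```

This equivalence at initialization is the design point: capacity is added (experts can later specialize during fine-tuning) without disturbing what the dense model has already learned.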
Problem

Research questions and friction points this paper is trying to address.

reasoning paradigms
out-of-domain generalization
large language models
deduction
induction
Innovation

Methods, ideas, or system contributions that make the work stand out.

fundamental reasoning paradigms
out-of-domain generalization
large language models
reasoning trajectories
mixture-of-experts