🤖 AI Summary
Large language models (LLMs) struggle with knowledge hallucination, limited higher-order reasoning, and tight parameter budgets when applied to domain-specific intelligence. To address these challenges, this paper proposes a novel "knowledge-reasoning decoupling" paradigm: domain knowledge is externalized as retrievable resources, while domain-specific reasoning patterns are internalized via cognition-guided instruction tuning. Methodologically, we introduce the first reasoning-modeling framework grounded in Bloom's Taxonomy, integrating retrieval-augmented training (RAG-style prompt injection), external knowledge-base integration, and efficient adaptation of a lightweight LLM (Llama-3.1-8B). Experiments demonstrate that our approach outperforms both retrieval-augmented GPT-4 and distilled DeepSeek-R1 across multiple domain tasks, validating the state-of-the-art (SOTA) advantages of the small-model-plus-external-knowledge architecture in accuracy, inference efficiency, and scalability.
📝 Abstract
Domain-specific intelligence demands specialized knowledge and sophisticated reasoning for problem-solving, posing significant challenges for large language models (LLMs) that struggle with knowledge hallucination and inadequate reasoning capabilities under constrained parameter budgets. Inspired by Bloom's Taxonomy in educational theory, we propose Retrieval-Augmented Reasoning Modeling (RARE), a novel paradigm that decouples knowledge storage from reasoning optimization. RARE externalizes domain knowledge to retrievable sources and internalizes domain-specific reasoning patterns during training. Specifically, by injecting retrieved knowledge into training prompts, RARE transforms the learning objective from rote memorization to contextualized reasoning application. This enables models to bypass parameter-intensive memorization and prioritize the development of higher-order cognitive processes. Our experiments demonstrate that lightweight RARE-trained models (e.g., Llama-3.1-8B) can achieve state-of-the-art performance, surpassing retrieval-augmented GPT-4 and DeepSeek-R1 distilled counterparts. RARE establishes a paradigm shift in which maintainable external knowledge bases synergize with compact, reasoning-optimized models, collectively driving more scalable domain-specific intelligence. Repo: https://github.com/Open-DataFlow/RARE
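The core training-time idea described above (injecting retrieved knowledge into the prompt so the supervision target is reasoning rather than recall) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the prompt template, field names, and `build_rare_example` helper are assumptions for exposition.

```python
# Minimal sketch of RARE-style training-example construction.
# Assumption: a supervised fine-tuning setup where loss is applied
# only to the target, so knowledge lives in the prompt (external)
# while reasoning is what the model learns (internal).

def build_rare_example(question, retrieved_passages, reasoning_target):
    """Return a (prompt, target) pair with retrieved knowledge injected
    into the prompt, shifting the objective from memorization to
    contextualized reasoning application."""
    knowledge = "\n".join(
        f"[{i + 1}] {passage}" for i, passage in enumerate(retrieved_passages)
    )
    prompt = (
        "Use the retrieved knowledge below to answer the question.\n"
        f"### Retrieved knowledge\n{knowledge}\n"
        f"### Question\n{question}\n"
        "### Reasoning and answer\n"
    )
    return {"prompt": prompt, "target": reasoning_target}


example = build_rare_example(
    question="Which mechanism best explains the observed interaction?",
    retrieved_passages=[
        "Passage A: background facts retrieved from the knowledge base.",
        "Passage B: additional domain evidence.",
    ],
    reasoning_target="Step 1: combine the passages... Therefore, ...",
)
```

At inference time the same template would be filled by a retriever over the maintainable external knowledge base, so updating domain knowledge requires no retraining.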