RAG+: Enhancing Retrieval-Augmented Generation with Application-Aware Reasoning

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RAG paradigms neglect the cognitive application of knowledge in task-specific reasoning, leading to a misalignment between retrieved facts and actual inference requirements. To address this, we propose an application-aware RAG framework that explicitly embeds task-oriented reasoning into the retrieval-generation pipeline. Our approach introduces a dual corpus, comprising a knowledge base and task-aligned application examples, and supports both manual and automated construction. We design a modular architecture featuring dual-path joint retrieval, example-alignment modeling, and multi-domain prompt-driven LLM collaborative reasoning. This is the first work to achieve end-to-end integration from retrieval outputs to structured, goal-directed reasoning. Extensive evaluation across mathematics, law, and healthcare domains, and across multiple LLMs, demonstrates average accuracy gains of 3-5%, reaching up to 7.5% in complex scenarios, while significantly improving interpretability and task adaptability.

📝 Abstract
The integration of external knowledge through Retrieval-Augmented Generation (RAG) has become foundational in enhancing large language models (LLMs) for knowledge-intensive tasks. However, existing RAG paradigms often overlook the cognitive step of applying knowledge, leaving a gap between retrieved facts and task-specific reasoning. In this work, we introduce RAG+, a principled and modular extension that explicitly incorporates application-aware reasoning into the RAG pipeline. RAG+ constructs a dual corpus consisting of knowledge and aligned application examples, created either manually or automatically, and retrieves both jointly during inference. This design enables LLMs not only to access relevant information but also to apply it within structured, goal-oriented reasoning processes. Experiments across mathematical, legal, and medical domains, conducted on multiple models, demonstrate that RAG+ consistently outperforms standard RAG variants, achieving average improvements of 3-5%, and peak gains up to 7.5% in complex scenarios. By bridging retrieval with actionable application, RAG+ advances a more cognitively grounded framework for knowledge integration, representing a step toward more interpretable and capable LLMs.
Problem

Research questions and friction points this paper is trying to address.

Bridging gap between retrieved facts and task-specific reasoning
Enhancing knowledge application in Retrieval-Augmented Generation (RAG)
Improving LLM performance in complex, knowledge-intensive domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates application-aware reasoning into RAG
Constructs dual corpus with knowledge and examples
Enables structured goal-oriented reasoning processes
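The dual-corpus idea above can be sketched in a few lines: each knowledge item is stored alongside an aligned application example, both are retrieved jointly, and the prompt pairs every fact with a worked demonstration of how to apply it. This is an illustrative sketch only, not the paper's implementation; the `Entry` structure, the toy lexical scorer, and the prompt template are all assumptions standing in for a real retriever and the authors' corpus format.

```python
# Illustrative sketch of dual-path joint retrieval (assumed names throughout;
# a real system would use an embedding-based retriever, not lexical overlap).
from dataclasses import dataclass

@dataclass
class Entry:
    knowledge: str    # a fact or rule from the knowledge corpus
    application: str  # the aligned example showing how to apply it

def score(query: str, text: str) -> float:
    """Toy word-overlap score standing in for a real retrieval model."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) + 1e-9)

def joint_retrieve(query: str, corpus: list[Entry], k: int = 2) -> list[Entry]:
    """Rank by the knowledge text, but return knowledge AND its aligned
    application example together (the 'dual corpus' retrieval step)."""
    return sorted(corpus, key=lambda e: score(query, e.knowledge),
                  reverse=True)[:k]

def build_prompt(query: str, retrieved: list[Entry]) -> str:
    """Pair each fact with its application example so the model is prompted
    to apply knowledge rather than merely restate it."""
    blocks = [f"Fact: {e.knowledge}\nHow to apply it: {e.application}"
              for e in retrieved]
    context = "\n\n".join(blocks)
    return (f"{context}\n\nQuestion: {query}\n"
            "Answer step by step, applying the facts above.")

corpus = [
    Entry("The area of a circle is pi * r**2.",
          "Example: for r = 3, area = 3.14159 * 9, about 28.27."),
    Entry("Negligence requires duty, breach, causation, and damages.",
          "Example: a texting driver breaches a duty of care; if the crash "
          "causes injury, all four elements are met."),
]

top = joint_retrieve("area of a circle", corpus, k=1)
print(build_prompt("What is the area of a circle with radius 5?", top))
```

The key design point the sketch illustrates is that retrieval operates over the knowledge text, but the unit returned to the generator is the (knowledge, application) pair, so the reasoning prompt always carries a goal-oriented demonstration.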
Yu Wang
Huawei Technologies Ltd.
Shiwan Zhao
Independent Researcher, Research Scientist of IBM Research - China (2000-2020)
AGI · Large Language Models · NLP · Speech · Recommender Systems
Ming Fan
Xi’an Jiaotong University
Zhihu Wang
Huawei Technologies Ltd.
Yubo Zhang
Huawei Technologies Ltd.
Xicheng Zhang
Xi’an Jiaotong University
Zhengfan Wang
Xi’an Jiaotong University
Heyuan Huang
Johns Hopkins University
Natural Language Processing · Medical Informatics · Machine Learning · Mental Health
Ting Liu
Xi’an Jiaotong University