🤖 AI Summary
Large language models (LLMs) frequently hallucinate on knowledge-intensive tasks because their reasoning is ungrounded and unstructured. Method: This paper proposes Logic-Augmented Generation (LAG), a novel paradigm that replaces the flat semantic matching typical of retrieval-augmented generation (RAG) with a structured, Cartesian-inspired reasoning framework. LAG comprises three core components: logical subproblem ordering, dependency-guided retrieval, and logical termination, which together enable closed-loop problem decomposition, dependency-aware inference, and result aggregation. Contribution/Results: By explicitly modeling logical dependencies, LAG mitigates error propagation, enhances reasoning interpretability, and improves alignment with human cognitive patterns. Experiments on four benchmark datasets show that LAG substantially reduces hallucination rates and markedly improves both robustness and accuracy in complex, multi-step reasoning, validating the importance of logic-structured modeling for knowledge-intensive generative tasks.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks, yet exhibit critical limitations in knowledge-intensive tasks, often generating hallucinations when faced with questions requiring specialized expertise. While retrieval-augmented generation (RAG) mitigates this by integrating external knowledge, it struggles with complex reasoning scenarios due to its reliance on direct semantic retrieval and lack of structured logical organization. Inspired by Cartesian principles from Descartes' *Discours de la méthode*, this paper introduces Logic-Augmented Generation (LAG), a novel paradigm that reframes knowledge augmentation through systematic question decomposition and dependency-aware reasoning. Specifically, LAG first decomposes complex questions into atomic sub-questions ordered by logical dependencies. It then resolves these sequentially, using prior answers to guide context retrieval for subsequent sub-questions, ensuring stepwise grounding in the logical chain. To prevent error propagation, LAG incorporates a logical termination mechanism that halts inference upon encountering unanswerable sub-questions and reduces wasted computation on excessive reasoning. Finally, it synthesizes all sub-resolutions to generate verified responses. Experiments on four benchmark datasets demonstrate that LAG significantly enhances reasoning robustness, reduces hallucination, and aligns LLM problem-solving with human cognition, offering a principled alternative to existing RAG systems.
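The pipeline described above (decompose, sequentially resolve with dependency-guided retrieval, terminate on unanswerable sub-questions, then synthesize) can be sketched as a simple loop. The helper functions below (`decompose`, `retrieve`, `answer`, `synthesize`) are hypothetical stand-ins, not the paper's actual implementations; in a real system each would be backed by an LLM and a retriever.

```python
# Minimal sketch of the LAG control flow, assuming placeholder components.
UNANSWERABLE = "UNANSWERABLE"

def decompose(question: str) -> list[str]:
    # Hypothetical decomposition into atomic sub-questions ordered by
    # logical dependency; here we simply split on ";" for illustration.
    return [q.strip() for q in question.split(";") if q.strip()]

def retrieve(sub_question: str, prior_answers: list[str]) -> str:
    # Hypothetical dependency-guided retrieval: the query is conditioned
    # on previously resolved answers, not just the raw sub-question.
    return f"context for: {sub_question} | given: {prior_answers}"

def answer(sub_question: str, context: str) -> str:
    # Hypothetical LLM call; a real system would return UNANSWERABLE when
    # the retrieved context cannot support an answer.
    return f"answer({sub_question})"

def synthesize(resolutions: list[tuple[str, str]]) -> str:
    # Aggregate the verified sub-resolutions into a final response.
    return " -> ".join(a for _, a in resolutions)

def lag(question: str) -> str:
    resolutions: list[tuple[str, str]] = []
    for sq in decompose(question):
        ctx = retrieve(sq, [a for _, a in resolutions])
        ans = answer(sq, ctx)
        if ans == UNANSWERABLE:
            # Logical termination: stop early instead of propagating
            # errors through downstream sub-questions.
            break
        resolutions.append((sq, ans))
    return synthesize(resolutions)
```

The early `break` is the key design choice: unlike a flat RAG pipeline that answers every sub-query regardless of confidence, an unanswerable step halts the chain before its error can contaminate later retrieval.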