Gradually Excavating External Knowledge for Implicit Complex Question Answering

📅 2026-03-09
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 7
Influential: 0
🤖 AI Summary
Large language models often struggle with open-domain implicit complex reasoning due to limited knowledge coverage, poor temporal relevance, and the insufficient reasoning depth of one-shot generation. To address these limitations, this work proposes a progressive external knowledge mining framework in which an adaptive action selection mechanism dynamically chooses between knowledge retrieval and logical reasoning at each iterative step, thereby integrating external knowledge with multi-step reasoning. Evaluated on the StrategyQA benchmark, the proposed method achieves 78.17% accuracy with less than 6% of the parameter count of competing models, establishing a new state of the art among ~10B-scale models.

📝 Abstract
Recently, large language models (LLMs) have gained much attention for their emergent human-comparable capabilities and huge potential. However, for open-domain implicit question answering, LLMs may not be the ultimate solution, for two reasons: 1) missing or out-of-date domain knowledge, and 2) one-shot generation, which restricts comprehensiveness. To this end, this work proposes a gradual knowledge excavation framework for open-domain complex question answering, where LLMs iteratively and actively acquire external information and then reason over the accumulated knowledge. Specifically, at each step of the solving process, the model selects an action to execute, such as querying external knowledge or performing a single logical reasoning step, to progress gradually toward a final answer. Our method can effectively leverage plug-and-play external knowledge and dynamically adjust the strategy for solving complex questions. Evaluated on the StrategyQA dataset, our method achieves 78.17% accuracy with less than 6% of the parameters of its competitors, setting a new SOTA for ~10B-scale LLMs.
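The iterative solve loop described in the abstract can be sketched as a minimal toy implementation. Everything here is an illustrative stand-in, not the paper's actual method: `select_action` replaces the LLM-driven adaptive action selection with a fixed heuristic, and `retrieve_knowledge` / `reason_step` are placeholders for a plug-and-play retriever and a single LLM reasoning step.

```python
from enum import Enum

class Action(Enum):
    RETRIEVE = "retrieve"   # query external knowledge
    REASON = "reason"       # perform one logical reasoning step
    ANSWER = "answer"       # stop and emit the final answer

def select_action(history, max_steps=6):
    """Toy policy: alternate retrieval and reasoning, then answer.
    In the paper this choice is made adaptively by the model itself;
    here it is a fixed heuristic so the loop is runnable."""
    if len(history) >= max_steps:
        return Action.ANSWER
    return Action.RETRIEVE if len(history) % 2 == 0 else Action.REASON

def retrieve_knowledge(question, history):
    # Placeholder for a plug-and-play external knowledge source
    # (search engine, knowledge base, ...); returns a dummy fact.
    return f"fact_{len(history)}"

def reason_step(question, history):
    # Placeholder for one logical reasoning step over gathered knowledge.
    return f"inference over {history[-1]}"

def solve(question, max_steps=6):
    """Gradual knowledge excavation loop: at each step select an
    action, execute it, and append the result to the history."""
    history = []
    while True:
        action = select_action(history, max_steps)
        if action is Action.ANSWER:
            return history
        elif action is Action.RETRIEVE:
            history.append(retrieve_knowledge(question, history))
        else:
            history.append(reason_step(question, history))

trace = solve("Did Aristotle use a laptop?")
```

The key design point the sketch preserves is that retrieval and reasoning are interleaved actions over a shared history, rather than a single one-shot generation.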
Problem

Research questions and friction points this paper is trying to address.

implicit question answering
open-domain QA
external knowledge
complex reasoning
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

gradual knowledge excavation
external knowledge integration
iterative reasoning
complex question answering
plug-and-play knowledge