AI Summary
This work addresses the limited synergy between knowledge base completion (KBC) and knowledge base question answering (KBQA), as well as the underutilization of large language models' (LLMs') reasoning capabilities in existing approaches. The authors propose JCQL, a novel framework that jointly optimizes KBC and KBQA for the first time. JCQL leverages a small language model (SLM) to enhance the accuracy of LLM-generated reasoning paths in KBQA and, in turn, uses these refined reasoning paths to incrementally fine-tune the SLM, thereby improving KBC performance. This bidirectional task enhancement not only mitigates LLM hallucinations and reduces computational overhead but also achieves state-of-the-art results on both tasks across two public benchmarks.
Abstract
Knowledge Bases (KBs) play a key role in various applications. As two representative KB-related tasks, knowledge base completion (KBC) and knowledge base question answering (KBQA) are closely related and inherently complementary to each other. It is therefore beneficial to solve them jointly so that they reinforce each other. However, existing studies usually rely on a small language model (SLM) to enhance them jointly, ignoring the strong reasoning ability of large language models (LLMs). In this paper, by combining the strengths of the LLM and the SLM, we propose JCQL, a novel framework that makes the two tasks enhance each other in an iterative manner. To make KBC enhance KBQA, we augment the reasoning paths of the LLM agent-based KBQA model by incorporating an SLM-trained KBC model as one of the agent's actions, alleviating the LLM's hallucination and high computational cost in KBQA. To make KBQA enhance KBC, we incrementally fine-tune the KBC model using KBQA's reasoning paths as supplementary training data, improving the SLM's ability in KBC. Extensive experiments on two public benchmark datasets demonstrate that JCQL surpasses all baselines on both the KBC and KBQA tasks.
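The bidirectional loop described in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in written for this summary, not the authors' implementation: `KBCModel`, `KBQAAgent`, and the toy triples are all assumptions, and real JCQL would use a fine-tuned SLM scorer and an LLM agent rather than dictionary lookups.

```python
# Hypothetical sketch of JCQL's mutual-enhancement loop (not the paper's code).

class KBCModel:
    """Toy stand-in for the SLM-based KBC model."""
    def __init__(self):
        self.training_data = []  # (head, relation, tail) triples

    def complete(self, head, relation):
        # Predict a missing tail entity; a real model would score candidates.
        for h, r, t in self.training_data:
            if h == head and r == relation:
                return t
        return None

    def fine_tune(self, reasoning_paths):
        # Incrementally absorb triples distilled from KBQA reasoning paths.
        for path in reasoning_paths:
            self.training_data.extend(path)


class KBQAAgent:
    """Toy stand-in for the LLM agent; the KBC model is one of its actions."""
    def __init__(self, kb, kbc_model):
        self.kb = kb        # known triples: {(head, relation): tail}
        self.kbc = kbc_model

    def answer(self, head, relation):
        # Action 1: retrieve from the KB; Action 2: fall back to the KBC model.
        tail = self.kb.get((head, relation))
        if tail is None:
            tail = self.kbc.complete(head, relation)
        path = [(head, relation, tail)] if tail else []
        return tail, path


# One iteration of mutual enhancement on toy data.
kbc = KBCModel()
kbc.training_data = [("Paris", "capital_of", "France")]
agent = KBQAAgent(kb={("Berlin", "capital_of"): "Germany"}, kbc_model=kbc)

answer, path = agent.answer("Paris", "capital_of")  # KB miss -> KBC action fills the gap
kbc.fine_tune([path])                               # KBQA reasoning path feeds back into KBC
```

In this sketch, KBC enhances KBQA by supplying an answer the KB lacks, and KBQA enhances KBC by returning its reasoning path as supplementary training data; iterating these two steps mirrors the framework's high-level loop.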