🤖 AI Summary
In the era of large language models (LLMs), it remains unclear whether knowledge extraction from semi-structured content retains value for question answering (QA). Method: This paper systematically investigates the feasibility and efficacy of integrating knowledge triple extraction with LLMs. We extend an existing benchmark with fine-grained triple annotations and propose a unified framework that jointly leverages knowledge extraction, context augmentation, and multi-task learning. Experiments are conducted across multiple commercial and open-source LLMs of varying scales. Contribution/Results: While web-scale triple extraction remains challenging for LLMs, the extracted symbolic knowledge consistently improves LLM-based QA performance, especially under low-resource conditions. Our findings empirically confirm that symbolic knowledge and neural models remain complementary, offering a lightweight, efficient pathway to knowledge-enhanced LLMs.
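Concretely, the extraction step can be read as ordinary LLM prompting. The sketch below is an illustration rather than the paper's implementation; the prompt wording and the `call_llm` stub are assumptions, and any chat-completion client can be substituted:

```python
# A minimal sketch of LLM-based triple extraction (assumed names, not the
# paper's code): prompt the model for (subject, predicate, object) triples.
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion client."""
    raise NotImplementedError("wire up your model client here")

EXTRACTION_PROMPT = """\
Extract knowledge triples from the semi-structured content below.
Return only a JSON list of [subject, predicate, object] triples.

Content:
{content}
"""

def extract_triples(content: str) -> list[list[str]]:
    """Prompt the LLM for triples; discard output that is not valid JSON."""
    raw = call_llm(EXTRACTION_PROMPT.format(content=content))
    try:
        triples = json.loads(raw)
    except json.JSONDecodeError:
        return []
    # Keep only well-formed 3-element triples.
    return [t for t in triples if isinstance(t, list) and len(t) == 3]
```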
📝 Abstract
The advent of Large Language Models (LLMs) has significantly advanced web-based Question Answering (QA) systems over semi-structured content, raising questions about the continued utility of knowledge extraction for question answering. This paper investigates the value of triple extraction in this new paradigm by extending an existing benchmark with knowledge extraction annotations and evaluating commercial and open-source LLMs of varying sizes. Our results show that web-scale knowledge extraction remains a challenging task for LLMs. Yet despite achieving high QA accuracy, LLMs still benefit from knowledge extraction, both through augmentation with extracted triples and through multi-task learning. These findings provide insights into the evolving role of knowledge triple extraction in web-based QA and highlight strategies for maximizing LLM effectiveness across different model sizes and resource settings.
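For the augmentation strategy the abstract mentions, one minimal reading (again an assumption, reusing the hypothetical `call_llm` stub from the sketch above) is to serialize the extracted triples and prepend them to the QA prompt as symbolic context:

```python
def answer_with_triples(question: str, triples: list[list[str]]) -> str:
    """Answer a question with extracted triples prepended as context."""
    # One "(subject; predicate; object)" fact per line.
    facts = "\n".join(f"({s}; {p}; {o})" for s, p, o in triples)
    prompt = (
        "Answer the question using the extracted facts below.\n\n"
        f"Facts:\n{facts}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)
```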