From Superficial to Deep: Integrating External Knowledge for Follow-up Question Generation Using Knowledge Graph and LLM

📅 2025-04-08
🏛️ International Conference on Computational Linguistics
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing dialogue systems generate follow-up questions that primarily rely on superficial contextual cues, lacking deep exploratory reasoning and falling significantly short of human-level question-asking capability. Method: We propose a three-stage knowledge-enhanced question generation framework: (1) precise identification of the dialogue context’s core topic; (2) online construction of a lightweight, dynamic knowledge graph (KG) to model semantic relationships in real time; and (3) synergistic integration of KG-structured knowledge with large language models’ (LLMs) commonsense reasoning via prompt engineering to generate high-information, cognitively sophisticated exploratory questions. Contribution/Results: This work introduces the first end-to-end question generation framework that jointly leverages dynamic KG construction and LLMs. Extensive evaluations—both human assessments and automated metrics (e.g., Q-BLEU, DepthScore)—demonstrate substantial improvements over baselines: generated questions exhibit significantly greater depth and informational value while maintaining strong contextual relevance.
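The three-stage pipeline described above can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: topic identification is reduced to word-frequency counting, the "online" knowledge graph is a hardcoded in-memory triple store, and the final stage only assembles the fused prompt that would be sent to an LLM. All names and the toy knowledge store are assumptions.

```python
from collections import Counter

# Stage 2 assumption: a tiny in-memory commonsense store standing in for
# the dynamically constructed knowledge graph (subject -> [(relation, object)]).
COMMONSENSE = {
    "coffee": [("madeOf", "coffee beans"), ("usedFor", "staying awake")],
}

STOPWORDS = {"i", "a", "the", "of", "to", "and", "was", "this", "morning"}

def identify_topic(context: str) -> str:
    """Stage 1 (simplified): pick the most frequent non-stopword as the topic."""
    words = [w.strip(".,!?").lower() for w in context.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return counts.most_common(1)[0][0]

def build_kg(topic: str) -> list[tuple[str, str, str]]:
    """Stage 2 (simplified): retrieve triples centered on the topic,
    standing in for real-time KG construction from external knowledge."""
    return [(topic, rel, obj) for rel, obj in COMMONSENSE.get(topic, [])]

def build_prompt(context: str, triples: list[tuple[str, str, str]]) -> str:
    """Stage 3: fuse KG facts with the dialogue context into an LLM prompt."""
    facts = "; ".join(f"{s} {r} {o}" for s, r, o in triples)
    return (f"Dialogue context: {context}\n"
            f"Related knowledge: {facts}\n"
            "Ask one deep, exploratory follow-up question.")

context = "I had coffee this morning. The coffee was great."
topic = identify_topic(context)
prompt = build_prompt(context, build_kg(topic))
print(prompt)
```

In the paper's actual framework, the prompt built in stage 3 is passed to a large language model, whose commonsense reasoning is combined with the injected KG triples to produce the follow-up question.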

📝 Abstract
In a conversational system, dynamically generating follow-up questions based on context can help users explore information and provide a better user experience. Humans are usually able to ask questions that draw on general life knowledge and demonstrate higher-order cognitive skills. However, the questions generated by existing methods are often limited to shallow contextual questions that are uninspiring and fall far short of the human level. In this paper, we propose a three-stage external knowledge-enhanced follow-up question generation method, which generates questions by identifying contextual topics, constructing a knowledge graph (KG) online, and finally combining these with a large language model to generate the final question. The model generates information-rich and exploratory follow-up questions by introducing external commonsense knowledge and performing a knowledge fusion operation. Experiments show that, compared to baseline models, our method generates questions that are more informative and closer to human questioning levels while maintaining contextual relevance.
Problem

Research questions and friction points this paper is trying to address.

Generating deep follow-up questions in conversational systems
Enhancing questions with external knowledge and knowledge graphs
Bridging the gap between AI and human-level questioning skills
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates a dynamically constructed knowledge graph for question generation
Combines LLM with external knowledge fusion
Generates deep, exploratory follow-up questions