SEAL: Self-Evolving Agentic Learning for Conversational Question Answering over Knowledge Graphs

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Knowledge-based conversational question answering (KBCQA) faces challenges in multi-turn dialogues, including coreference resolution, contextual modeling, and complex logical reasoning; existing approaches suffer from substantial structural errors and high computational overhead. This paper proposes a two-stage semantic parsing framework: first, a large language model extracts the query's core semantics, which an agentic calibration module then refines; second, template completion and placeholder instantiation generate executable S-expression logical forms. A novel self-evolving mechanism is introduced, integrating local/global memory with a reflection module to enable zero-shot continual adaptation grounded in dialogue history and execution feedback. Evaluated on the SPICE benchmark, the method achieves state-of-the-art performance, significantly improving logical-structure accuracy and inference efficiency, particularly for multi-hop reasoning, comparative, and aggregation tasks.

📝 Abstract
Knowledge-based conversational question answering (KBCQA) confronts persistent challenges in resolving coreference, modeling contextual dependencies, and executing complex logical reasoning. Existing approaches, whether end-to-end semantic parsing or stepwise agent-based reasoning, often suffer from structural inaccuracies and prohibitive computational costs, particularly when processing intricate queries over large knowledge graphs. To address these limitations, we introduce SEAL, a novel two-stage semantic parsing framework grounded in self-evolving agentic learning. In the first stage, a large language model (LLM) extracts a minimal S-expression core that captures the essential semantics of the input query. This core is then refined by an agentic calibration module, which corrects syntactic inconsistencies and aligns entities and relations precisely with the underlying knowledge graph. The second stage employs template-based completion, guided by question-type prediction and placeholder instantiation, to construct a fully executable S-expression. This decomposition not only simplifies logical form generation but also significantly enhances structural fidelity and linking efficiency. Crucially, SEAL incorporates a self-evolving mechanism that integrates local and global memory with a reflection module, enabling continuous adaptation from dialog history and execution feedback without explicit retraining. Extensive experiments on the SPICE benchmark demonstrate that SEAL achieves state-of-the-art performance, especially in multi-hop reasoning, comparison, and aggregation tasks. The results validate notable gains in both structural accuracy and computational efficiency, underscoring the framework's capacity for robust and scalable conversational reasoning.
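The second stage described in the abstract — template-based completion guided by question-type prediction and placeholder instantiation — can be illustrated with a minimal sketch. All names here (the template table, the question types, the example S-expressions) are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch of SEAL's stage 2: wrap a minimal S-expression core
# in a template selected by the predicted question type. The templates and
# question types below are assumptions, not the paper's actual grammar.

TEMPLATES = {
    "count": "(COUNT {core})",     # aggregation question
    "verify": "(EXISTS {core})",   # yes/no question
    "simple": "{core}",            # plain entity-seeking question
}

def complete_s_expression(core: str, question_type: str) -> str:
    """Instantiate the template for the predicted question type."""
    template = TEMPLATES.get(question_type, TEMPLATES["simple"])
    return template.format(core=core)

# Stage 1 would produce a calibrated minimal core, e.g.:
core = "(JOIN (R capital_of) France)"
# Stage 2 completes it into an executable logical form:
print(complete_s_expression(core, "count"))
# → (COUNT (JOIN (R capital_of) France))
```

The point of the decomposition is that the LLM only has to emit the small core, while the outer structure — the part most prone to syntactic errors — comes from a fixed, well-formed template.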
Problem

Research questions and friction points this paper is trying to address.

Resolves coreference and contextual dependencies in knowledge-based conversational QA
Reduces structural inaccuracies and computational costs in complex query processing
Enables continuous adaptation from dialog history without explicit retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage semantic parsing with self-evolving agentic learning
Agentic calibration refines core S-expression for structural accuracy
Self-evolving mechanism adapts from dialog history without retraining
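The self-evolving mechanism — local/global memory plus a reflection module that learns from execution feedback without retraining — could look roughly like the following sketch. The class, fields, and retrieval policy are all assumptions made for illustration; the paper's actual design may differ:

```python
# Hypothetical sketch of a self-evolving loop: execution feedback is turned
# into stored "lessons" (reflection), kept in per-dialogue local memory and
# cross-dialogue global memory, then retrieved to condition the next parse.

class SelfEvolvingAgent:
    def __init__(self):
        self.local_memory = []    # turns of the current dialogue
        self.global_memory = []   # reusable lessons across dialogues

    def reflect(self, query: str, logical_form: str, succeeded: bool):
        """Record execution feedback as a lesson; no parameter updates."""
        lesson = {"query": query, "form": logical_form, "ok": succeeded}
        self.local_memory.append(lesson)
        if not succeeded:         # failures become globally reusable lessons
            self.global_memory.append(lesson)

    def context(self):
        """Memory retrieved to ground the next parsing step."""
        return {"local": self.local_memory[-3:],
                "global": self.global_memory[-3:]}

agent = SelfEvolvingAgent()
agent.reflect("capital of France?", "(JOIN (R capital_of) France)", True)
agent.reflect("and its population?", "(JOIN (R population) France)", False)
# The failed turn is now available as a global lesson for future dialogues.
```

Because adaptation happens purely through memory writes and retrieval, the mechanism supports the zero-shot continual adaptation the abstract claims: no gradient step or retraining is required between dialogues.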
Hao Wang
Institute of Big Data and Artificial Intelligence, China Telecom Research Institute, Beijing, 102209, China
Jialun Zhong
Wangxuan Institute of Computer Technology, Peking University, Beijing, 100871, China
Changcheng Wang
Wangxuan Institute of Computer Technology, Peking University, Beijing, 100871, China
Zhujun Nie
School of Artificial Intelligence, China University of Geosciences (Beijing), Beijing, 100083, China
Zheng Li
Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai, 519087, Guangdong, China
Shunyu Yao
Institute of Big Data and Artificial Intelligence, China Telecom Research Institute, Beijing, 102209, China
Yanzeng Li
Beijing Normal University
Xinchi Li
Institute of Big Data and Artificial Intelligence, China Telecom Research Institute, Beijing, 102209, China