KBQA-R1: Reinforcing Large Language Models for Knowledge Base Question Answering

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) applied to knowledge base question answering (KBQA) tend to generate spurious queries and to rely on rigid, context-agnostic templates. To address this, the paper reformulates the natural-language-to-logical-form mapping as a multi-turn interactive decision process and introduces the first reinforcement learning framework for KBQA guided by execution feedback. It proposes Referenced Rejection Sampling (RRS), a data synthesis method that strictly aligns reasoning trajectories with executable action sequences over the knowledge base, mitigating cold-start and hallucination issues. Combined with Group Relative Policy Optimization (GRPO) and executable logical form generation, the approach enables robust multi-step knowledge graph navigation. Evaluated on WebQSP, GrailQA, and GraphQuestions, the method achieves state-of-the-art performance: logical-form accuracy and execution verifiability improve significantly, hallucination rates decrease, and generalization to compositional and zero-shot reasoning is enhanced.

📝 Abstract
Knowledge Base Question Answering (KBQA) challenges models to bridge the gap between natural language and strict knowledge graph schemas by generating executable logical forms. While Large Language Models (LLMs) have advanced this field, current approaches often struggle with a dichotomy of failure: they either generate hallucinated queries without verifying schema existence or exhibit rigid, template-based reasoning that mimics synthesized traces without true comprehension of the environment. To address these limitations, we present KBQA-R1, a framework that shifts the paradigm from text imitation to interaction optimization via Reinforcement Learning. Treating KBQA as a multi-turn decision process, our model learns to navigate the knowledge base using a set of actions, leveraging Group Relative Policy Optimization (GRPO) to refine its strategies based on concrete execution feedback rather than static supervision. Furthermore, we introduce Referenced Rejection Sampling (RRS), a data synthesis method that resolves cold-start challenges by strictly aligning reasoning traces with ground-truth action sequences. Extensive experiments on WebQSP, GrailQA, and GraphQuestions demonstrate that KBQA-R1 achieves state-of-the-art performance, effectively grounding LLM reasoning in verifiable execution.
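The abstract's core training signal is GRPO's group-relative advantage: several logical forms are sampled per question, each is executed against the knowledge base, and each rollout's reward is normalized against its group's statistics. A minimal sketch of that normalization, assuming a binary execution-feedback reward (function and variable names are illustrative, not from the paper's code):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    """Normalize each rollout's reward against its sampling group's
    mean and standard deviation (the GRPO-style advantage)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    if sigma == 0:
        # All rollouts scored the same: no learning signal for this group.
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

# Example: execution feedback for 4 sampled logical forms
# (1.0 = query executed and returned the gold answer, 0.0 otherwise).
rewards = [1.0, 0.0, 1.0, 0.0]
advantages = group_relative_advantages(rewards)
# → [1.0, -1.0, 1.0, -1.0]
```

Rollouts that execute correctly receive positive advantage and are reinforced; failed executions are pushed down, without needing a separate learned value model.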
Problem

Research questions and friction points this paper is trying to address.

Addresses hallucinated queries and rigid reasoning in KBQA
Shifts from text imitation to interaction optimization via RL
Resolves cold-start by aligning reasoning with ground-truth actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning optimizes KBQA via interaction feedback
Group Relative Policy Optimization refines strategies with execution results
Referenced Rejection Sampling aligns reasoning with ground-truth actions
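The RRS idea above can be sketched as a simple filter: sample many candidate reasoning traces, then reject any whose knowledge-base action sequence deviates from the ground-truth reference, so the cold-start data contains only traces grounded in executable actions. This is a hedged illustration of the described principle, not the paper's implementation; all names and the action vocabulary are hypothetical:

```python
def referenced_rejection_sampling(candidate_traces, reference_actions):
    """Keep only sampled traces whose KB action sequence exactly
    matches the ground-truth (reference) action sequence."""
    return [
        trace for trace in candidate_traces
        if trace["actions"] == reference_actions
    ]

# Example: two sampled traces; only the first follows the gold actions.
gold = ["find_relation", "constrain_type", "execute"]
traces = [
    {"reasoning": "step-by-step trace A",
     "actions": ["find_relation", "constrain_type", "execute"]},
    {"reasoning": "step-by-step trace B",
     "actions": ["find_relation", "execute"]},
]
accepted = referenced_rejection_sampling(traces, gold)
# → only trace A survives
```

Filtering on the action sequence rather than on the reasoning text keeps the retained traces verifiable by execution, which is what mitigates hallucinated cold-start data.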
Xin Sun
Institute of Automation, Chinese Academy of Sciences
Zhongqi Chen
Ant Group
Xing Zheng
Ph.D., University of California, Riverside
Sensor fusion, SLAM, VIO
Qiang Liu
Institute of Automation, Chinese Academy of Sciences
Shu Wu
Institute of Automation, Chinese Academy of Sciences
Bowen Song
Ant Group
Zilei Wang
University of Science and Technology of China
Computer vision, deep learning, pattern recognition
Weiqiang Wang
Ant Group
Liang Wang
Institute of Automation, Chinese Academy of Sciences