🤖 AI Summary
To address the tendency of large language models (LLMs) in knowledge base question answering (KBQA) to generate spurious queries and to rely on rigid, context-agnostic templates, this paper reformulates natural-language-to-logical-form mapping as a multi-turn interactive decision process and introduces the first reinforcement learning framework guided by execution feedback. We propose Referenced Rejection Sampling (RRS), a data synthesis method that ensures reasoning trajectories strictly align with executable action sequences over the knowledge base, mitigating cold-start and hallucination issues. Combined with Group Relative Policy Optimization (GRPO) and executable logical form generation, our approach enables robust multi-step knowledge graph navigation. Evaluated on WebQSP, GrailQA, and GraphQuestions, our method achieves state-of-the-art performance: it significantly improves logical form accuracy and execution verifiability, reduces hallucination rates, and enhances generalization to compositional and zero-shot reasoning.
📝 Abstract
Knowledge Base Question Answering (KBQA) challenges models to bridge the gap between natural language and strict knowledge graph schemas by generating executable logical forms. While Large Language Models (LLMs) have advanced this field, current approaches often exhibit one of two failure modes: they either generate hallucinated queries without verifying schema existence, or they fall into rigid, template-based reasoning that mimics synthesized traces without true comprehension of the environment. To address these limitations, we present **KBQA-R1**, a framework that shifts the paradigm from text imitation to interaction optimization via Reinforcement Learning. Treating KBQA as a multi-turn decision process, our model learns to navigate the knowledge base through a defined set of actions, leveraging Group Relative Policy Optimization (GRPO) to refine its strategies based on concrete execution feedback rather than static supervision. Furthermore, we introduce **Referenced Rejection Sampling (RRS)**, a data synthesis method that resolves cold-start challenges by strictly aligning reasoning traces with ground-truth action sequences. Extensive experiments on WebQSP, GrailQA, and GraphQuestions demonstrate that KBQA-R1 achieves state-of-the-art performance, effectively grounding LLM reasoning in verifiable execution.
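The core idea of Referenced Rejection Sampling can be illustrated with a minimal sketch: sampled reasoning traces are kept only if the action sequence they commit to exactly matches the ground-truth reference sequence. The trace format below (free-form reasoning interleaved with `ACTION:`-prefixed lines) and the helper names are illustrative assumptions, not the paper's actual implementation.

```python
def extract_actions(trace: str) -> list[str]:
    # Hypothetical trace format: lines prefixed "ACTION:" carry the
    # executable KB step; all other lines are free-form reasoning.
    return [line[len("ACTION:"):].strip()
            for line in trace.splitlines()
            if line.startswith("ACTION:")]

def referenced_rejection_sampling(candidate_traces: list[str],
                                  reference_actions: list[str]) -> list[str]:
    # Accept only those sampled traces whose executed action sequence
    # strictly aligns with the ground-truth (reference) sequence,
    # rejecting traces whose reasoning diverges from verifiable steps.
    return [trace for trace in candidate_traces
            if extract_actions(trace) == reference_actions]

# Toy usage: two sampled traces, one aligned with the reference.
reference = ["find_relation(m.team, roster)", "get_entity(roster, player)"]
traces = [
    "I should look up the roster first.\n"
    "ACTION: find_relation(m.team, roster)\n"
    "Now retrieve the player entity.\n"
    "ACTION: get_entity(roster, player)",
    "I will guess the answer directly.\n"
    "ACTION: get_entity(m.team, player)",
]
accepted = referenced_rejection_sampling(traces, reference)
# Only the first trace survives the rejection filter.
```

In this view, RRS is a data filter rather than a training objective: it yields cold-start supervision whose reasoning text is guaranteed to be consistent with an executable, verified action sequence.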