🤖 AI Summary
Existing knowledge graph question answering (KGQA) methods suffer either from the poor adaptability of static path retrieval or from the high computational cost and inaccurate path evaluation of dynamic path generation. To address these issues, this paper proposes a dynamic reasoning framework integrating symbolic search with adaptive evaluation. Its core contributions are: (1) an LLM-guided Monte Carlo Tree Search (MCTS) for controllable, interpretable multi-hop path exploration; (2) a context-aware lightweight Transformer cross-attention scoring model to enhance path discrimination accuracy; and (3) a pseudo-path self-optimization mechanism that generates high-quality training signals at low computational cost. Evaluated on multiple KGQA benchmarks, the method substantially outperforms state-of-the-art approaches, achieving a superior balance between inference efficiency and answer accuracy. Extensive experiments validate the framework's generalizability across diverse KGQA settings and its capacity for continual improvement, demonstrating both robustness and evolvability.
📝 Abstract
Knowledge Graph Question Answering (KGQA) aims to interpret natural language queries and perform structured reasoning over knowledge graphs by leveraging their relational and semantic structures to retrieve accurate answers. Recent KGQA methods primarily follow either the retrieve-then-reason paradigm, relying on GNNs or heuristic rules for static path extraction, or dynamic path generation strategies that use large language models (LLMs) with prompting to jointly perform retrieval and reasoning. However, the former suffers from limited adaptability due to static path extraction and a lack of contextual refinement, while the latter incurs high computational costs and struggles with accurate path evaluation due to its reliance on fixed scoring functions and extensive LLM calls. To address these issues, this paper proposes Dynamically Adaptive MCTS-based Reasoning (DAMR), a novel framework that integrates symbolic search with adaptive path evaluation for efficient and context-aware KGQA. DAMR employs a Monte Carlo Tree Search (MCTS) backbone guided by an LLM-based planner, which selects the top-$k$ relevant relations at each step to reduce the search space. To improve path evaluation accuracy, we introduce a lightweight Transformer-based scorer that performs context-aware plausibility estimation by jointly encoding the question and relation sequence through cross-attention, enabling the model to capture fine-grained semantic shifts during multi-hop reasoning. Furthermore, to alleviate the scarcity of high-quality supervision, DAMR incorporates a dynamic pseudo-path refinement mechanism that periodically generates training signals from partial paths explored during search, allowing the scorer to continuously adapt to the evolving distribution of reasoning trajectories. Extensive experiments on multiple KGQA benchmarks show that DAMR significantly outperforms state-of-the-art methods.
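To make the overall loop concrete, the sketch below is a toy illustration of the MCTS-with-pruned-expansion idea described in the abstract; it is **not** the paper's implementation. The LLM-based planner and the Transformer-based scorer are replaced by trivial token-overlap stand-ins, and the tiny knowledge graph, entity names, and all function names (`planner_top_k`, `scorer`, `mcts`) are hypothetical, chosen only to show how planner-pruned expansion and scorer-driven backpropagation fit together.

```python
import math
import random

# Hypothetical toy KG: entity -> {relation: [tail entities]}.
KG = {
    "Inception": {"directed_by": ["Christopher Nolan"], "released_in": ["2010"]},
    "Christopher Nolan": {"born_in": ["London"], "directed": ["Inception"]},
    "London": {"capital_of": ["UK"]},
}

def planner_top_k(question, entity, k=2):
    """Stand-in for the LLM-based planner: rank an entity's outgoing
    relations by naive token overlap with the question, keep the top-k."""
    q_tokens = set(question.lower().replace("?", "").split())
    rels = list(KG.get(entity, {}))
    return sorted(rels, key=lambda r: -len(q_tokens & set(r.split("_"))))[:k]

def scorer(question, path):
    """Stand-in for the Transformer cross-attention scorer: plausibility
    of a relation sequence, here just question/relation token overlap."""
    q_tokens = set(question.lower().replace("?", "").split())
    rel_tokens = {t for _, r, _ in path for t in r.split("_")}
    return len(q_tokens & rel_tokens) / (len(rel_tokens) + 1)

class Node:
    def __init__(self, entity, path, parent=None):
        self.entity, self.path, self.parent = entity, path, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    # Unvisited nodes are explored first.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(question, start_entity, iterations=50, k=2, max_hops=2):
    root = Node(start_entity, [])
    for _ in range(iterations):
        # Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # Expansion: the planner prunes candidate relations to the top-k.
        if len(node.path) < max_hops:
            for rel in planner_top_k(question, node.entity, k):
                for tail in KG[node.entity][rel]:
                    node.children.append(
                        Node(tail, node.path + [(node.entity, rel, tail)], node))
            if node.children:
                node = random.choice(node.children)
        # Evaluation + backpropagation of the scorer's plausibility.
        reward = scorer(question, node.path)
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the first hop of the most promising path.
    best = max(root.children, key=lambda n: n.value / max(n.visits, 1))
    return best.path

path = mcts("Who directed Inception?", "Inception")
print(path)  # [('Inception', 'directed_by', 'Christopher Nolan')]
```

In the actual framework, `planner_top_k` would be an LLM call proposing relations, `scorer` a jointly trained Transformer encoding question and relation sequence via cross-attention, and the pseudo-path refinement step (not sketched here) would periodically turn explored partial paths into training signals for that scorer.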