🤖 AI Summary
This work proposes NeuroSymActive, a modular, differentiable neuro-symbolic reasoning framework for efficiently and accurately integrating neural models with knowledge graphs in knowledge-intensive multi-hop question answering. The approach combines soft-unification-based symbolic reasoning, a neural path evaluator, and a value-guided Monte Carlo tree exploration mechanism to jointly optimize reasoning accuracy and computational efficiency. Evaluated on standard KGQA benchmarks, the model achieves high answer accuracy while substantially reducing the number of expensive graph queries and model invocations, demonstrating both effectiveness and practicality.
📝 Abstract
Large pretrained language models and neural reasoning systems have advanced many natural language tasks, yet they remain challenged by knowledge-intensive queries that require precise, structured multi-hop inference. Knowledge graphs provide a compact symbolic substrate for factual grounding, but integrating graph structure with neural models is nontrivial: naively embedding graph facts into prompts is inefficient and fragile, while purely symbolic or search-heavy approaches can be costly in retrievals and lack gradient-based refinement. We introduce NeuroSymActive, a modular framework that combines a differentiable neuro-symbolic reasoning layer with an active, value-guided exploration controller for Knowledge Graph Question Answering. The method couples soft-unification-style symbolic modules with a neural path evaluator and a Monte Carlo-style exploration policy that prioritizes high-value path expansions. Empirical results on standard KGQA benchmarks show that NeuroSymActive attains strong answer accuracy while reducing the number of expensive graph lookups and model calls compared to common retrieval-augmented baselines.
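The abstract describes an exploration policy that expands knowledge-graph paths in order of estimated value, so that fewer graph lookups are spent on unpromising branches. The sketch below illustrates that general idea only: a best-first search over a toy graph with a stand-in heuristic in place of the paper's learned neural path evaluator. The graph, entities, and scoring function are all hypothetical, not taken from the paper.

```python
import heapq

# Toy knowledge graph: head entity -> list of (relation, tail) edges.
# All entities and relations here are illustrative placeholders.
KG = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("part_of", "Europe"), ("capital", "Paris")],
    "Europe": [("contains", "France")],
}

def path_value(path, target):
    """Stand-in for a learned path evaluator: score a partial path.

    Here: reward reaching the target, lightly penalize length.
    In the paper's setting this would be a neural scoring model.
    """
    return (1.0 if path[-1] == target else 0.0) - 0.1 * len(path)

def value_guided_search(start, target, max_expansions=10):
    """Best-first expansion of KG paths, always expanding the
    highest-value partial path first (a sketch, not the paper's
    exact algorithm). Returns (path, number_of_expansions)."""
    frontier = [(-path_value((start,), target), (start,))]
    expansions = 0
    while frontier and expansions < max_expansions:
        _, path = heapq.heappop(frontier)
        tail = path[-1]
        if tail == target:
            return path, expansions
        expansions += 1  # each expansion corresponds to one graph lookup
        for _, nxt in KG.get(tail, []):
            if nxt not in path:  # avoid revisiting entities (no cycles)
                new_path = path + (nxt,)
                priority = -path_value(new_path, target)
                heapq.heappush(frontier, (priority, new_path))
    return None, expansions
```

Because the one-hop path `Paris -> Europe` scores higher than the detour through `France`, the search answers after a single expansion, which is the efficiency argument the abstract makes about prioritizing high-value expansions.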