🤖 AI Summary
Answering complex first-order logic queries involving projection, intersection, union, and negation over incomplete knowledge graphs is highly challenging. This work proposes ROG, a novel framework that integrates query-aware neighborhood retrieval with chain-of-thought reasoning from large language models. ROG recursively decomposes multi-operator queries into single-operator subqueries and leverages compact, relevant neighborhood evidence at each reasoning step. By introducing a retrieval-augmented, stepwise reasoning mechanism along with caching and reuse of intermediate answer sets, ROG significantly enhances reasoning consistency and robustness, particularly for high-complexity and negation-containing queries. Experimental results demonstrate that ROG consistently outperforms strong embedding-based baselines across standard benchmarks.
📝 Abstract
Answering first-order logic (FOL) queries over incomplete knowledge graphs (KGs) is difficult, especially for complex query structures that compose projection, intersection, union, and negation. We propose ROG, a retrieval-augmented framework that combines query-aware neighborhood retrieval with large language model (LLM) chain-of-thought reasoning. ROG decomposes a multi-operator query into a sequence of single-operator sub-queries and grounds each step in compact, query-relevant neighborhood evidence. Intermediate answer sets are cached and reused across steps, improving consistency on deep reasoning chains. This design reduces compounding errors and yields more robust inference on complex and negation-heavy queries. Overall, ROG provides a practical alternative to embedding-based logical reasoning by replacing learned operators with retrieval-grounded, step-wise inference. Experiments on standard KG reasoning benchmarks show consistent gains over strong embedding-based baselines, with the largest improvements on high-complexity and negation-heavy query types.
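The decomposition-and-caching idea can be sketched in code. The toy knowledge graph, the tuple query encoding, and all entity names below are hypothetical illustrations, not from the paper; where ROG grounds each single-operator step in LLM reasoning over retrieved neighborhood evidence, this sketch simply evaluates each operator symbolically, so only the recursive decomposition and the reuse of cached intermediate answer sets are shown:

```python
# Toy KG: (head, relation) -> set of tails. Hypothetical example data.
KG = {
    ("alice", "friend"): {"bob", "carol"},
    ("bob", "friend"): {"dave"},
    ("carol", "likes"): {"jazz"},
    ("dave", "likes"): {"rock"},
}
ENTITIES = {"alice", "bob", "carol", "dave", "jazz", "rock"}

# query -> answer set, mirroring ROG's caching of intermediate answers
CACHE = {}

def answer(query):
    """Recursively evaluate a nested FOL query.

    Query forms (hashable tuples, so results can be cached):
      ("e", name)       anchor entity
      ("p", rel, sub)   projection: follow rel from answers of sub
      ("and", q1, q2)   intersection
      ("or", q1, q2)    union
      ("not", q)        negation (complement over the entity set)
    """
    if query in CACHE:                      # reuse an earlier sub-result
        return CACHE[query]
    op = query[0]
    if op == "e":
        result = {query[1]}
    elif op == "p":
        _, rel, sub = query
        result = set()
        for ent in answer(sub):             # one single-operator step per call
            result |= KG.get((ent, rel), set())
    elif op == "and":
        result = answer(query[1]) & answer(query[2])
    elif op == "or":
        result = answer(query[1]) | answer(query[2])
    elif op == "not":
        result = ENTITIES - answer(query[1])
    else:
        raise ValueError(f"unknown operator: {op}")
    CACHE[query] = result
    return result

# "What do friends-of-friends of alice like?" (two projections, then one more)
q = ("p", "likes", ("p", "friend", ("p", "friend", ("e", "alice"))))
print(answer(q))  # -> {'rock'}
```

A multi-operator query such as "friends of alice who are not bob" decomposes the same way: `("and", ("p", "friend", ("e", "alice")), ("not", ("e", "bob")))`, and the cached answer set for the shared `("p", "friend", ("e", "alice"))` step is reused rather than recomputed.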