Counting Still Counts: Understanding Neural Complex Query Answering Through Query Relaxation

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether neural Complex Query Answering (CQA) models possess implicit reasoning capabilities beyond explicit graph structure, or merely emulate symbolic query relaxation. To this end, we propose a training-free baseline grounded in path counting and constraint relaxation, and conduct a systematic evaluation of state-of-the-art neural CQA models across multiple knowledge graph benchmarks and diverse complex query patterns. Results show that mainstream neural models do not consistently outperform this baseline, and that the two approaches' reasoning behaviors are largely complementary. Integrating neural modeling with query relaxation yields stable accuracy improvements. Our core contributions are threefold: (i) clarifying the actual reasoning boundaries of neural CQA, (ii) establishing training-free relaxation as a critical, interpretable baseline, and (iii) empirically validating the effectiveness and generalizability of hybrid neural-symbolic paradigms for complex query answering.

📝 Abstract
Neural methods for Complex Query Answering (CQA) over knowledge graphs (KGs) are widely believed to learn patterns that generalize beyond explicit graph structure, allowing them to infer answers that are unreachable through symbolic query processing. In this work, we critically examine this assumption through a systematic analysis comparing neural CQA models with an alternative, training-free query relaxation strategy that retrieves possible answers by relaxing query constraints and counting resulting paths. Across multiple datasets and query structures, we find several cases where neural and relaxation-based approaches perform similarly, with no neural model consistently outperforming the latter. Moreover, a similarity analysis reveals that their retrieved answers exhibit little overlap, and that combining their outputs consistently improves performance. These results call for a re-evaluation of progress in neural query answering: despite their complexity, current models fail to subsume the reasoning patterns captured by query relaxation. Our findings highlight the importance of stronger non-neural baselines and suggest that future neural approaches could benefit from incorporating principles of query relaxation.
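The relaxation baseline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact procedure: the toy knowledge graph, the query shapes, and the scoring details below are assumptions. The idea it shows is the stated one: score candidate answers by the number of KG paths that reach them, and relax a conjunctive query by rewarding answers matched by any branch instead of requiring all of them.

```python
from collections import Counter

# Toy KG: (head, relation) -> set of tails. Entities and relations are
# illustrative placeholders, not from the paper's benchmarks.
KG = {
    ("turing", "field"): {"cs", "logic"},
    ("church", "field"): {"logic"},
    ("cs", "subfield_of"): {"math"},
    ("logic", "subfield_of"): {"math", "philosophy"},
}

def path_counts(anchor, relations):
    """Count distinct KG paths from `anchor` to each entity reachable by
    following `relations` in order; the count serves as a ranking score
    (more supporting paths -> higher confidence)."""
    counts = Counter({anchor: 1})
    for rel in relations:
        nxt = Counter()
        for entity, c in counts.items():
            for tail in KG.get((entity, rel), set()):
                nxt[tail] += c  # paths multiply along the chain
        counts = nxt
    return counts

def relaxed_intersection(branches):
    """Relax a conjunctive (intersection) query: instead of requiring an
    answer to satisfy every branch, sum per-branch path counts, so answers
    supported by more branches (and more paths) rank higher."""
    total = Counter()
    for anchor, relations in branches:
        total.update(path_counts(anchor, relations))
    return total
```

For example, `path_counts("turing", ["field", "subfield_of"])` scores `math` as 2 (one path through `cs`, one through `logic`) and `philosophy` as 1, so the ranking prefers the answer with more structural support.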
Problem

Research questions and friction points this paper is trying to address.

Examining neural models' generalization beyond explicit graph structure
Comparing neural CQA with training-free query relaxation strategies
Re-evaluating progress in neural query answering through combined approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural models compared with training-free query relaxation
Relaxation retrieves answers by counting paths after constraint relaxation
Combining neural and relaxation outputs improves performance consistently
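One simple way to realize the combination mentioned above is reciprocal rank fusion, a standard rank-merging scheme; the paper's actual combination method may differ, so treat this as an illustrative sketch. It exploits exactly the reported property that the two rankings overlap little: an answer surfaced near the top by either model still receives a strong fused score.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: merge several ranked answer lists by
    summing 1/(k + rank) per list. `k` dampens the influence of
    top-ranked outliers; 60 is a conventional default."""
    scores = {}
    for ranking in rankings:
        for rank, entity in enumerate(ranking, start=1):
            scores[entity] = scores.get(entity, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs from a neural CQA model and the relaxation baseline:
neural_ranking = ["a", "b", "c"]
relaxation_ranking = ["c", "a", "d"]
fused = rrf_fuse([neural_ranking, relaxation_ranking])
```

Here `a` and `c` each appear high in one list, so both outrank answers found by only a single method further down, which is the behavior a complementarity-driven ensemble wants.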