AI Summary
This study investigates the out-of-distribution generalization of Transformer models to unseen first-order logic (FOL) entailment, using knowledge graph query answering as the evaluation benchmark to characterize the relationship between distribution shift and logical generalization. We identify, for the first time, an intrinsic inconsistency between standard positional encoding and the requirements of logical reasoning. To address this, we propose TEGA, a logic-aware Transformer architecture that integrates syntax-sensitive input encoding, query-structure embeddings, and structured positional modeling. As an end-to-end framework, TEGA significantly improves fine-grained logical generalization, outperforming specialized logical reasoners on multi-source, compositional benchmarks. Comprehensive ablations systematically validate the critical roles of query syntax modeling, token-level semantic alignment, and architectural logical consistency in enabling robust generalization. Our work establishes a novel, interpretable, and scalable paradigm for neural-symbolic reasoning.
Abstract
Transformers, as a fundamental deep learning architecture, have demonstrated remarkable reasoning capabilities. This paper investigates the generalizable first-order logical reasoning ability of transformers with their parameterized knowledge and explores ways to improve it. The first-order reasoning capability of transformers is assessed through their ability to perform first-order logical entailment, which is quantitatively measured by their performance in answering knowledge graph queries. We establish connections between (1) the two types of distribution shifts studied in out-of-distribution generalization and (2) the unseen-knowledge and unseen-query settings discussed in knowledge graph query answering, enabling a characterization of fine-grained generalizability. Results on our comprehensive dataset show that transformers outperform previous methods specifically designed for this task, and they provide detailed empirical evidence on how input query syntax, token embedding, and transformer architecture affect the reasoning capability of transformers. Interestingly, our findings reveal a mismatch between positional encoding and other design choices in the transformer architectures employed in prior practice. This discovery motivates us to propose a more sophisticated, logic-aware architecture, TEGA, to enhance the capability for generalizable first-order logical entailment in transformers.
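To make the evaluation task concrete, the following toy sketch (not from the paper; all entity and relation names are hypothetical) shows how answering a knowledge graph query amounts to checking a first-order entailment: a conjunctive query with an existential variable is answered by composing relational projections over the graph's triples.

```python
# Toy illustration of knowledge graph query answering as FOL entailment.
# The graph and relation names below are hypothetical examples.

# Knowledge graph as a set of (head, relation, tail) triples.
triples = {
    ("alice", "worksAt", "acme"),
    ("bob", "worksAt", "acme"),
    ("acme", "locatedIn", "paris"),
}

def project(entities, relation):
    """Relational projection: all tails reachable from `entities` via `relation`."""
    return {t for (h, r, t) in triples if r == relation and h in entities}

# Conjunctive query q(y) = ∃x. worksAt(alice, x) ∧ locatedIn(x, y)
# ("In which city is Alice's employer located?"), answered by chaining
# two projections; the graph entails q(paris).
answers = project(project({"alice"}, "worksAt"), "locatedIn")
print(answers)  # {'paris'}
```

A neural query-answering model must produce the same answer set without symbolically traversing the graph, and the generalization question is whether it still does so for query structures or knowledge unseen during training.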