Enhancing Transformers for Generalizable First-Order Logical Entailment

πŸ“… 2025-01-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study investigates the out-of-distribution generalization of Transformer models to unseen first-order logic (FOL) entailment reasoning, using knowledge graph query answering as the evaluation benchmark to characterize the relationship between distributional shift and logical generalization. We identify, for the first time, an intrinsic inconsistency between standard positional encoding and logical reasoning requirements. To address this, we propose TEGAβ€”a logic-aware Transformer architecture integrating syntax-sensitive input encoding, query-structure embeddings, and structured positional modeling. As an end-to-end framework, TEGA significantly improves fine-grained logical generalization, outperforming specialized logical reasoners on multi-source, compositional benchmarks. Comprehensive ablations systematically validate the critical roles of query syntactic modeling, token-level semantic alignment, and architectural logical consistency in enabling robust generalization. Our work establishes a novel, interpretable, and scalable paradigm for neural-symbolic reasoning.

πŸ“ Abstract
Transformers, as a fundamental deep learning architecture, have demonstrated remarkable capabilities in reasoning. This paper investigates the generalizable first-order logical reasoning ability of transformers with their parameterized knowledge and explores ways to improve it. The first-order reasoning capability of transformers is assessed through their ability to perform first-order logical entailment, which is quantitatively measured by their performance in answering knowledge graph queries. We establish connections between (1) two types of distribution shifts studied in out-of-distribution generalization and (2) the unseen knowledge and query settings discussed in the task of knowledge graph query answering, enabling a characterization of fine-grained generalizability. Results on our comprehensive dataset show that transformers outperform previous methods specifically designed for this task and provide detailed empirical evidence on the impact of input query syntax, token embedding, and transformer architectures on the reasoning capability of transformers. Interestingly, our findings reveal a mismatch between positional encoding and other design choices in transformer architectures employed in prior practices. This discovery motivates us to propose a more sophisticated, logic-aware architecture, TEGA, to enhance the capability for generalizable first-order logical entailment in transformers.
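The abstract's claim of a mismatch between positional encoding and logical reasoning can be illustrated with a small sketch (not the paper's actual encoding scheme): a conjunctive knowledge-graph query is logically invariant under reordering of its conjuncts, but standard absolute positional encoding assigns different position vectors to the same token depending on where it happens to appear in the serialized query. The token sequences and the `sinusoidal_pe` helper below are illustrative assumptions, not taken from the paper.

```python
import math

def sinusoidal_pe(pos, d_model=8):
    # Standard sinusoidal positional encoding (Vaswani et al., 2017).
    return [
        math.sin(pos / 10000 ** (2 * (i // 2) / d_model)) if i % 2 == 0
        else math.cos(pos / 10000 ** (2 * (i // 2) / d_model))
        for i in range(d_model)
    ]

# A conjunctive query such as  q = V? . r1(a, V?) AND r2(b, V?)
# has the same meaning under either ordering of its two conjuncts,
# but its serialized token sequence does not reflect that:
query_a = ["AND", "r1", "a", "r2", "b"]
query_b = ["AND", "r2", "b", "r1", "a"]  # logically equivalent permutation

# With absolute positional encoding, the same relation token receives
# different position vectors in the two equivalent serializations:
pos_in_a = query_a.index("r1")  # position 1
pos_in_b = query_b.index("r1")  # position 3
print(sinusoidal_pe(pos_in_a) != sinusoidal_pe(pos_in_b))  # True
```

This permutation sensitivity is one concrete form the encoding/logic inconsistency can take; the paper's TEGA architecture addresses such issues with structured positional modeling.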
Problem

Research questions and friction points this paper is trying to address.

Transformer Models
Logical Reasoning
First-Order Logic
Innovation

Methods, ideas, or system contributions that make the work stand out.

TEGA
Transformer Enhancement
Knowledge Graph Reasoning
πŸ”Ž Similar Papers
Tianshi ZHENG
Department of Computer Science and Engineering, HKUST, Hong Kong SAR, China
JiaZheng Wang
Department of Computer Science and Engineering, Beihang University, Beijing, China
Zihao Wang
Department of Computer Science and Engineering, HKUST, Hong Kong SAR, China
Jiaxin Bai
Hong Kong University of Science and Technology
Natural Language Processing
Hang Yin
Department of Mathematical Sciences, Tsinghua University, Beijing, China
Zheye Deng
HKUST
Large Language Models · Text-to-Structure · Agent Reinforcement Learning
Yangqiu Song
HKUST
Artificial Intelligence · Data Mining · Natural Language Processing · Knowledge Graphs · Commonsense Reasoning
Jianxin Li
Department of Computer Science and Engineering, Beihang University, Beijing, China