KGCompiler: Deep Learning Compilation Optimization for Knowledge Graph Complex Logical Query Answering

📅 2025-03-04
🤖 AI Summary
To address the escalating inference latency and memory overhead in Complex Logical Query Answering (CLQA) over knowledge graphs—particularly as the number of first-order logic (FOL) operators increases—this paper introduces KGCompiler, the first deep learning compiler tailored for CLQA. Methodologically, KGCompiler establishes a knowledge-graph–specific compilation and optimization paradigm, enabling cross-algorithm, non-intrusive, end-to-end optimization. It integrates KG semantics–aware graph-structure scheduling with an FOL operator–level intermediate representation (IR), facilitating tensor fusion and memory reuse. Evaluated on multiple CLQA benchmarks, KGCompiler achieves 1.04×–8.26× speedup (average 3.71×) over baseline systems, significantly reduces GPU memory consumption, and provides a plug-and-play API for seamless integration.
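The paper does not reproduce KGCompiler's internals here, but the fusion idea it describes can be sketched abstractly. The following is a hedged illustration (not KGCompiler's actual API; the query form, `answer_unfused`/`answer_fused` names, and the min-based intersection are assumptions for this example) of how a conjunction query built from FOL operators can be rewritten from one-materialized-tensor-per-operator into a single fused expression, which is the kind of transformation an operator-level IR enables:

```python
import numpy as np

# Hypothetical sketch of FOL operator fusion for a conjunction query
#   q = V? . r1(a, V?) AND r2(b, V?), scored against all entity embeddings.
# Names and modeling choices here are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
dim, n_entities = 16, 100
E = rng.standard_normal((n_entities, dim))   # entity embedding table
a, b = E[3], E[7]                            # anchor-entity embeddings
R1, R2 = rng.standard_normal((2, dim, dim))  # relation projection matrices

def answer_unfused(a, b):
    # One FOL operator per step: two projections, then an intersection
    # (modeled as an elementwise minimum); each step materializes a tensor.
    v1 = a @ R1                 # projection through r1
    v2 = b @ R2                 # projection through r2
    q = np.minimum(v1, v2)      # intersection of the two branch embeddings
    return E @ q                # score every candidate entity

def answer_fused(a, b):
    # Fused form: the operator chain collapses into one expression, so a
    # compiler backend could emit a single kernel and reuse buffers
    # instead of allocating v1, v2, and q separately.
    return E @ np.minimum(a @ R1, b @ R2)

assert np.allclose(answer_unfused(a, b), answer_fused(a, b))
```

Both forms return identical scores; the difference a compiler exploits is in kernel count and intermediate memory traffic, which is where the reported speedups and memory savings come from.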

📝 Abstract
Complex Logical Query Answering (CLQA) involves intricate multi-hop logical reasoning over large-scale and potentially incomplete Knowledge Graphs (KGs). Although existing CLQA algorithms achieve high accuracy in answering such queries, their reasoning time and memory usage scale significantly with the number of First-Order Logic (FOL) operators involved, creating serious challenges for practical deployment. In addition, current research primarily focuses on algorithm-level optimizations for CLQA tasks, often overlooking compiler-level optimizations, which can offer greater generality and scalability. To address these limitations, we introduce a Knowledge Graph Compiler, namely KGCompiler, the first deep learning compiler specifically designed for CLQA tasks. By incorporating KG-specific optimizations proposed in this paper, KGCompiler enhances the reasoning performance of CLQA algorithms without requiring additional manual modifications to their implementations. At the same time, it significantly reduces memory usage. Extensive experiments demonstrate that KGCompiler accelerates CLQA algorithms by factors ranging from 1.04x to 8.26x, with an average speedup of 3.71x. We also provide an interface to enable hands-on experience with KGCompiler.
Problem

Research questions and friction points this paper is trying to address.

Reasoning time and memory usage of CLQA algorithms scale sharply with the number of FOL operators, hindering practical deployment.
Existing work focuses on algorithm-level optimizations and largely overlooks compiler-level optimization.
Per-algorithm manual tuning lacks the generality and scalability of a compilation-based approach.
Innovation

Methods, ideas, or system contributions that make the work stand out.

First deep learning compiler designed specifically for CLQA over knowledge graphs
Non-intrusive: optimizes existing CLQA algorithms without manual implementation changes
Accelerates reasoning by 1.04x-8.26x (average 3.71x) while reducing GPU memory usage
Hongyu Lin
University of Chinese Academy of Sciences; Institute of Software, Chinese Academy of Sciences
Haoran Luo
Nanyang Technological University
Hanghang Cao
University of Chinese Academy of Sciences; Institute of Software, Chinese Academy of Sciences
Yang Liu
University of Chinese Academy of Sciences; Institute of Software, Chinese Academy of Sciences
Shihao Gao
University of Chinese Academy of Sciences; Institute of Software, Chinese Academy of Sciences
Kaichun Yao
Institute of Software, Chinese Academy of Sciences
Libo Zhang
Institute of Software, Chinese Academy of Sciences
Mingjie Xing
Institute of Software, Chinese Academy of Sciences
Yanjun Wu
Institute of Software, Chinese Academy of Sciences