Approximate Subgraph Matching with Neural Graph Representations and Reinforcement Learning

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Approximate subgraph matching (ASM) is the NP-hard problem of determining whether a query graph approximately occurs within a large-scale target graph. This work proposes a novel approach built on a branch-and-bound framework that uniquely integrates graph Transformers with reinforcement learning. The graph Transformer captures global structural information, and the matching policy is pretrained with imitation learning and then fine-tuned with Proximal Policy Optimization (PPO) to maximize long-term matching rewards. Evaluated on both synthetic and real-world datasets, the proposed approach significantly outperforms existing state-of-the-art methods in both matching accuracy and computational efficiency.
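The PPO fine-tuning stage mentioned above optimizes a clipped surrogate objective over episode rewards. The following is a minimal sketch of that loss in plain Python; the function name `ppo_clip_loss` and the batch layout are illustrative assumptions, not taken from the paper's code:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped PPO surrogate loss for a batch of actions.

    logp_new / logp_old: log-probabilities of the taken actions under the
    current and behavior policies; advantages: estimated advantages.
    Returns the loss to minimize (negative mean clipped surrogate).
    """
    total = 0.0
    for lp_n, lp_o, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(lp_n - lp_o)            # importance ratio pi_new / pi_old
        clipped = max(min(ratio, 1 + eps), 1 - eps)
        total += min(ratio * adv, clipped * adv)  # pessimistic (clipped) bound
    return -total / len(advantages)
```

When the two policies agree (`logp_new == logp_old`) the ratio is 1 and the loss reduces to the negative mean advantage; large policy updates are clipped to the `[1 - eps, 1 + eps]` band, which is what keeps PPO updates conservative.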

📝 Abstract
Approximate subgraph matching (ASM) is the task of determining the approximate presence of a given query graph in a large target graph. Although NP-hard, ASM is critical in graph analysis, with a myriad of applications ranging from database systems and network science to biochemistry and privacy. Existing techniques often employ heuristic search strategies, which cannot fully utilize the graph information, leading to sub-optimal solutions. This paper proposes a Reinforcement Learning based Approximate Subgraph Matching (RL-ASM) algorithm that exploits graph Transformers to effectively extract graph representations and RL-based policies for ASM. Our model is built upon the branch-and-bound algorithm, which selects one pair of nodes from the two input graphs at a time as a potential match. Instead of using heuristics, we exploit a Graph Transformer architecture to extract feature representations that encode the full graph information. To enhance the training of the RL policy, we use supervised signals to guide our agent in an imitation learning stage. Subsequently, the policy is fine-tuned with Proximal Policy Optimization (PPO), which optimizes the cumulative long-term rewards over episodes. Extensive experiments on both synthetic and real-world datasets demonstrate that our RL-ASM outperforms existing methods in terms of effectiveness and efficiency. Our source code is available at https://github.com/KaiyangLi1992/RL-ASM.
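The branch-and-bound skeleton the abstract describes — selecting one query/target node pair at a time and pruning branches that cannot beat the best match found — can be sketched as follows. The learned Graph Transformer policy is replaced here by a hypothetical degree-based `score` function, and the matching cost simply counts query edges the mapping fails to preserve (0 means an exact subgraph match); this is a sketch under those assumptions, not the paper's implementation:

```python
def branch_and_bound_asm(query, target, score=None):
    """Approximate subgraph matching via branch-and-bound.

    query / target: adjacency dicts {node: set(neighbors)}.
    Returns (best_cost, best_mapping), where cost is the number of
    query edges not preserved by the mapping.
    """
    if score is None:
        # Stand-in for a learned policy: prefer target nodes whose
        # degree is close to the query node's degree.
        def score(q, t, mapping):
            return -abs(len(query[q]) - len(target[t]))

    q_nodes = list(query)
    best = [float("inf"), None]  # [best_cost, best_mapping]

    def cost_increase(q, t, mapping):
        # Query edges from q to already-mapped neighbors that are
        # missing between their images in the target graph.
        return sum(1 for nb in query[q]
                   if nb in mapping and mapping[nb] not in target[t])

    def recurse(i, mapping, cost):
        if cost >= best[0]:          # bound: cost never decreases
            return
        if i == len(q_nodes):        # all query nodes matched
            best[0], best[1] = cost, dict(mapping)
            return
        q = q_nodes[i]
        used = set(mapping.values())
        cands = [t for t in target if t not in used]
        # Branch in policy order: most promising pair (q, t) first.
        cands.sort(key=lambda t: score(q, t, mapping), reverse=True)
        for t in cands:
            mapping[q] = t
            recurse(i + 1, mapping, cost + cost_increase(q, t, mapping))
            del mapping[q]

    recurse(0, {}, 0)
    return best[0], best[1]
```

A learned policy only changes the order in which candidate pairs are branched on (and hence how quickly the bound tightens), not the correctness of the search, which is why the RL policy can be swapped in for the degree heuristic without altering the framework.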
Problem

Research questions and friction points this paper is trying to address.

Approximate Subgraph Matching
Graph Analysis
NP-hard Problem
Query Graph
Target Graph
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximate Subgraph Matching
Graph Transformer
Reinforcement Learning
Imitation Learning
Branch-and-Bound
Kaiyang Li
University of Connecticut
Parameter-Efficient Fine-Tuning, Graph Neural Network
Shihao Ji
University of Connecticut, 352 Mansfield Road, Storrs, CT 06269, USA
Zhipeng Cai
Professor, IEEE Fellow, ACM Distinguished Member, Georgia State University
Internet of Things, Privacy, Algorithm, Big Data, Networking
Wei Li
Georgia State University, 33 Gilmer Street SE, Atlanta, GA 30303, USA