🤖 AI Summary
Subgraph matching is critical for applications such as knowledge graph question answering and molecule design, yet existing neural approaches explore only fragmented regions of the graph matching network design space. This paper presents the first systematic exploration of a unified design space, identifying three core architectural axes: the inter-graph interaction mechanism (attention vs. soft permutation), the node vs. edge alignment strategy, and the architecture of the final scoring network. Building on these axes, the authors propose a composable and scalable neural graph matching framework and uncover synergistic gains from cross-axis design choices. Empirical evaluation on multiple subgraph matching benchmarks shows that the best configuration significantly outperforms state-of-the-art methods. The study further distills generalizable design principles, yielding transferable architectural insights and practical guidelines for neural graph matching.
📝 Abstract
Subgraph matching is vital in knowledge graph (KG) question answering, molecule design, scene graph analysis, and code and circuit search, among other applications. Neural methods have shown promising results for subgraph matching. Our study of recent systems suggests refactoring them into a unified design space for graph matching networks. Existing methods occupy only a few isolated patches in this space, which remains largely uncharted. We undertake the first comprehensive exploration of this space, along such axes as attention-based vs. soft permutation-based interaction between query and corpus graphs, aligning nodes vs. edges, and the form of the final scoring network that integrates the neural representations of the two graphs. Our extensive experiments reveal that judicious and hitherto-unexplored combinations of choices in this space lead to large performance benefits. Beyond better performance, our study uncovers valuable insights and establishes general design principles for neural graph representation and interaction, which may be of wider interest.
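The three axes named in the abstract can be pictured as a composable configuration space whose Cartesian product enumerates candidate architectures. Below is a minimal sketch of that idea; the axis names follow the abstract, but the concrete option names (e.g. `MLP` vs. `HINGE` scoring) and the `MatcherConfig` class are hypothetical illustrations, not the paper's actual API.

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

# Axis 1: how query and corpus graphs interact (from the abstract).
class Interaction(Enum):
    ATTENTION = "attention"
    SOFT_PERMUTATION = "soft_permutation"

# Axis 2: which graph elements are aligned (from the abstract).
class Alignment(Enum):
    NODE = "node"
    EDGE = "edge"

# Axis 3: form of the final scoring network.
# These option names are hypothetical placeholders.
class Scoring(Enum):
    HINGE = "hinge"
    MLP = "mlp"

@dataclass(frozen=True)
class MatcherConfig:
    """One point in the design space: a choice on each axis."""
    interaction: Interaction
    alignment: Alignment
    scoring: Scoring

def design_space() -> list[MatcherConfig]:
    """Enumerate every combination of the three axes."""
    return [
        MatcherConfig(i, a, s)
        for i, a, s in product(Interaction, Alignment, Scoring)
    ]

configs = design_space()
print(len(configs))  # 2 * 2 * 2 = 8 candidate architectures
```

A systematic sweep over such a product space is what lets the study identify previously unexplored combinations; existing methods correspond to only a few fixed points in it.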