Leveraging Classical Algorithms for Graph Neural Networks

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) exhibit limited generalization—particularly under out-of-distribution (OOD) conditions—in molecular property prediction. To address this, we propose an algorithm-guided pretraining paradigm: leveraging execution traces of 24 classical graph algorithms from the CLRS benchmark as structured priors, we explicitly inject algorithmic logic into GNNs via layer-wise initialization and parameter freezing, thereby endowing them with verifiable inductive biases. This work is the first to utilize algorithmic execution traces as supervisory signals for GNN pretraining. Evaluated on the Open Graph Benchmark, our method achieves substantial OOD generalization gains on ogbg-molhiv (HIV inhibition prediction) and ogbg-molclintox (clinical toxicity prediction), improving absolute performance by 6% and 3%, respectively—consistently surpassing randomly initialized baselines across all settings.

📝 Abstract
Neural networks excel at processing unstructured data but often fail to generalise out-of-distribution, whereas classical algorithms guarantee correctness but lack flexibility. We explore whether pretraining Graph Neural Networks (GNNs) on classical algorithms can improve their performance on molecular property prediction tasks from the Open Graph Benchmark: ogbg-molhiv (HIV inhibition) and ogbg-molclintox (clinical toxicity). GNNs trained on 24 classical algorithms from the CLRS Algorithmic Reasoning Benchmark are used to initialise and freeze selected layers of a second GNN for molecular prediction. Compared to a randomly initialised baseline, the pretrained models achieve consistent wins or ties, with Segments Intersect pretraining yielding a 6% absolute gain on ogbg-molhiv and Dijkstra pretraining achieving a 3% gain on ogbg-molclintox. These results demonstrate that embedding classical algorithmic priors into GNNs provides useful inductive biases, boosting performance on complex, real-world graph data.
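The transfer scheme the abstract describes (initialise selected layers of a downstream GNN from an algorithm-pretrained GNN, then keep them frozen during fine-tuning) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer names ("encoder", "processor", "decoder") are hypothetical stand-ins for the GNN's actual modules, and parameters are plain floats rather than weight tensors.

```python
# Sketch of layer-wise initialisation plus parameter freezing.
# Assumptions: models are dicts mapping layer names to parameters;
# the layer names and values here are illustrative only.

def transfer_and_freeze(pretrained, target, freeze):
    """Copy the named layers from a pretrained model into a target
    model and record them as frozen (excluded from gradient updates)."""
    params = dict(target)
    for name in freeze:
        params[name] = pretrained[name]  # layer-wise initialisation
    return params, set(freeze)

def sgd_step(params, grads, frozen, lr=0.1):
    """One gradient-descent step that leaves frozen layers untouched."""
    return {
        name: value if name in frozen else value - lr * grads[name]
        for name, value in params.items()
    }

# Toy example: reuse the "processor" from an algorithm-pretrained model.
pretrained = {"encoder": 0.9, "processor": 0.5, "decoder": 0.1}
target = {"encoder": 0.0, "processor": 0.0, "decoder": 0.0}

params, frozen = transfer_and_freeze(pretrained, target, ["processor"])
params = sgd_step(params, {"encoder": 1.0, "processor": 1.0, "decoder": 1.0},
                  frozen)

print(params["processor"])  # still 0.5: frozen at the pretrained value
print(params["encoder"])    # updated normally by the gradient step
```

In a real framework the same effect is obtained by loading the pretrained layer weights and disabling their gradients before fine-tuning on the molecular task.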
Problem

Research questions and friction points this paper is trying to address.

Improving GNN generalization via classical algorithm pretraining
Enhancing molecular property prediction with algorithmic priors
Bridging neural flexibility with classical correctness guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretraining GNNs on classical algorithms improves molecular prediction
Freezing pretrained layers enhances generalization on graph benchmarks
Embedding algorithmic priors provides inductive biases for GNNs