🤖 AI Summary
Existing graph neural networks (GNNs) employ uniform, fixed-depth message passing, overlooking heterogeneity across nodes in structural roles, local neighborhoods, and task-specific requirements—thereby limiting expressive power. This work first empirically demonstrates significant variation in the optimal propagation depth at the node level. Building on this insight, we propose a general, scalable depth-adaptive framework that dynamically selects the number of message-passing steps per node via a learnable layer-decision mechanism. Our approach is model-agnostic, seamlessly integrating with any message-passing-based GNN without modifying underlying operators, and supports end-to-end training. Evaluated on node classification across multiple benchmark datasets, it consistently outperforms state-of-the-art baselines, demonstrating superior effectiveness, generalizability, and computational efficiency.
📝 Abstract
Graph Neural Networks (GNNs) have proven to be highly effective in various graph learning tasks. A key characteristic of GNNs is their use of a fixed number of message-passing steps for all nodes in the graph, regardless of each node's diverse computational needs and characteristics. Through empirical analysis of real-world data, we demonstrate that the optimal number of message-passing layers varies for nodes with different characteristics. This finding is further supported by experiments conducted on synthetic datasets. To address this, we propose Adaptive Depth Message Passing GNN (ADMP-GNN), a novel framework that dynamically adjusts the number of message-passing layers for each node, resulting in improved performance. This approach applies to any model that follows the message-passing scheme. We evaluate ADMP-GNN on the node classification task and observe performance improvements over baseline GNN models.
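The abstract does not spell out how the per-node layer decision is made, but the core idea of depth-adaptive message passing can be sketched minimally. The example below is an illustrative assumption, not ADMP-GNN itself: it runs GCN-style propagation for `0..max_depth` steps, keeps each intermediate representation, and lets a hypothetical learnable gate (`W_gate`) softly weight the candidate depths per node. The real mechanism in the paper may use a different (e.g. hard or supervised) layer-decision rule.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_depth_forward(A, X, W_gate, max_depth=3):
    """Illustrative per-node depth-adaptive message passing.

    A      : (n, n) adjacency matrix
    X      : (n, d) node features
    W_gate : (d, max_depth + 1) hypothetical gating weights that score
             each candidate depth 0..max_depth for every node
    """
    n = A.shape[0]
    # Symmetric normalization with self-loops (GCN-style propagation).
    A_hat = A + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Collect node representations after 0..max_depth propagation steps.
    reps = [X]
    H = X
    for _ in range(max_depth):
        H = A_norm @ H                     # one message-passing step
        reps.append(H)
    reps = np.stack(reps, axis=1)          # (n, max_depth + 1, d)

    # Learnable layer decision: each node softly weights candidate depths,
    # so different nodes can effectively read out at different depths.
    gate = softmax(X @ W_gate, axis=-1)    # (n, max_depth + 1)
    return (gate[:, :, None] * reps).sum(axis=1)   # (n, d)
```

A soft mixture over depths keeps the layer decision differentiable, so the whole model trains end-to-end as the summary describes; a hard per-node choice could be recovered at inference time by taking the argmax of the gate.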