🤖 AI Summary
Fixed-depth designs in graph neural networks (GNNs) limit node classification performance and robustness. To address this, we propose the first node-level personalized layer selection paradigm: dynamically selecting the optimal GNN representation layer for each node's prediction. Our core method, MetSelect1, integrates metric learning with class-prototype embeddings, combining variance-normalized distances to class prototypes across layers to adaptively pick the best representation layer for each node. Evaluated on 10 benchmark datasets and across three mainstream GNN architectures, MetSelect1 consistently improves classification accuracy, enables training of deeper GNNs, and enhances robustness against graph data poisoning attacks. This work breaks the conventional "uniform-depth" design assumption—where all nodes are classified from the same final layer—and establishes a new direction for fine-grained graph representation learning through node-adaptive architectural customization.
📝 Abstract
Graph Neural Networks (GNNs) combine node attributes over a fixed granularity of the local graph structure around a node to predict its label. However, a node-level property may depend on a different granularity of the local neighborhood for different nodes, and applying the same level of smoothing to all nodes can be detrimental to their classification. In this work, we challenge the common assumption that a single GNN layer should classify all nodes of a graph by training GNNs with a distinct, personalized prediction layer for each node. Inspired by metric learning, we propose a novel algorithm, MetSelect1, to select the optimal representation layer for classifying each node. In particular, we identify a prototype representation of each class in a transformed GNN layer and then classify each node using the layer where its distance to a class prototype is smallest, after normalizing by that layer's variance. Results on 10 datasets and 3 different GNNs show that we significantly improve the node classification accuracy of GNNs in a plug-and-play manner. We also find that using variable layers for prediction enables GNNs to be deeper and more robust to poisoning attacks. We hope this work can inspire future works to learn more adaptive and personalized graph representations.
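The selection rule described above can be sketched as follows. This is a minimal, hedged illustration of the idea (not the paper's implementation): for each layer we compute class prototypes as mean training embeddings, measure every node's distance to each prototype, normalize by that layer's variance so layers are comparable, and predict each node from whichever layer gives it the smallest normalized prototype distance. The function name `metselect_predict` and all details (mean prototypes, Euclidean distance, scalar layer variance) are assumptions for illustration.

```python
import numpy as np

def metselect_predict(layer_embs, labels, train_mask):
    """Per-node layer selection via variance-normalized prototype distances.

    layer_embs: list of (num_nodes, dim) arrays, one per GNN layer.
    labels:     (num_nodes,) int array; only train_mask entries build prototypes.
    Returns the predicted class for every node.
    """
    num_nodes = layer_embs[0].shape[0]
    classes = np.unique(labels[train_mask])
    best_dist = np.full(num_nodes, np.inf)   # smallest normalized distance so far
    pred = np.zeros(num_nodes, dtype=labels.dtype)
    for H in layer_embs:
        # Class prototypes: mean training embedding per class at this layer.
        protos = np.stack([H[train_mask & (labels == c)].mean(axis=0)
                           for c in classes])
        # Euclidean distance from every node to every prototype: (N, C).
        dists = np.linalg.norm(H[:, None, :] - protos[None, :, :], axis=-1)
        # Normalize by this layer's std so distances are comparable across layers.
        dists_norm = dists / np.sqrt(H.var() + 1e-12)
        layer_best = dists_norm.min(axis=1)
        layer_pred = classes[dists_norm.argmin(axis=1)]
        # Keep the layer (and its predicted class) with the smallest distance.
        update = layer_best < best_dist
        best_dist[update] = layer_best[update]
        pred[update] = layer_pred[update]
    return pred
```

In this sketch the per-node "personalized layer" is implicit: a node's prediction comes from whichever layer won the normalized-distance comparison, which is exactly the plug-and-play behavior the abstract describes, since no retraining of the GNN backbone is required.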