🤖 AI Summary
Graph Neural Networks (GNNs) exhibit limited generalization in graph-level tasks—particularly under noise, long-tailed class distributions, and few-shot settings. To address this, we propose a unified evaluation framework for graph-level classification and regression, enabling systematic assessment of GNN generalization. Our method introduces two key innovations: (1) a structural modeling approach based on *k*-path rooted subgraphs, which enhances subgraph counting capability and representational expressivity; and (2) a cross-domain graph contrastive learning mechanism with adaptive edge pruning, improving robustness against distribution shifts and perturbations. Extensive experiments across 27 graph datasets and 14 effective baseline models demonstrate that our approach achieves superior generalization performance under noisy, long-tailed, and few-shot conditions.
📝 Abstract
Graphs are essential data structures for modeling complex interactions in domains such as social networks, molecular structures, and biological systems. Graph-level tasks, which predict properties or classes for an entire graph, are critical for applications such as molecular property prediction and subgraph counting. Graph Neural Networks (GNNs) have shown promise in these tasks, but their evaluations are often limited to narrow datasets and tasks and rely on inconsistent experimental setups, restricting conclusions about their generalizability. To address these limitations, we propose a unified evaluation framework for graph-level GNNs. This framework provides a standardized setting to evaluate GNNs across diverse datasets, various graph tasks (e.g., graph classification and regression), and challenging scenarios, including noisy, imbalanced, and few-shot graphs. Additionally, we propose a novel GNN model with enhanced expressivity and generalization capabilities. Specifically, we enhance the expressivity of GNNs through a $k$-path rooted subgraph approach, enabling the model to effectively count subgraphs (e.g., paths and cycles). Moreover, we introduce a unified graph contrastive learning algorithm for graphs across diverse domains, which adaptively removes unimportant edges to augment graphs, thereby significantly improving generalization performance. Extensive experiments demonstrate that our model outperforms fourteen effective baselines across twenty-seven graph datasets, establishing it as a robust and generalizable model for graph-level tasks.
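To give a concrete sense of the $k$-path idea, the sketch below counts, for each root node, the number of simple paths of each length up to $k$ that start at that root. This is only a toy illustration of rooted $k$-path statistics (the paper's actual encoding and function names are not given here); message-passing GNNs cannot in general recover such counts, which is why path-aware features can raise expressivity.

```python
def k_path_counts(edges, num_nodes, k):
    """Per-root counts of simple paths of length 1..k.

    Toy stand-in for a k-path rooted subgraph feature: for every root
    node, returns a list where entry i is the number of simple paths
    of length i+1 starting at that root.
    """
    # Build an undirected adjacency list.
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def dfs(node, visited, depth, counts):
        # Extend the current simple path by one edge at a time.
        if depth == k:
            return
        for nxt in adj[node]:
            if nxt not in visited:
                counts[depth] += 1  # found a simple path of length depth + 1
                visited.add(nxt)
                dfs(nxt, visited, depth + 1, counts)
                visited.remove(nxt)  # backtrack

    features = []
    for root in range(num_nodes):
        counts = [0] * k
        dfs(root, {root}, 0, counts)
        features.append(counts)
    return features

# On a triangle, every root starts 2 paths of length 1 and 2 of length 2.
print(k_path_counts([(0, 1), (1, 2), (0, 2)], 3, 2))  # [[2, 2], [2, 2], [2, 2]]
```

Concatenating such per-root count vectors to node features is one simple way structural counts can be exposed to a downstream GNN; exhaustive path enumeration is exponential in $k$, so practical methods keep $k$ small or restrict the rooted subgraphs considered.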