🤖 AI Summary
DropEdge alleviates overfitting in Graph Neural Networks (GNNs) via random edge dropout, yet delivers limited performance gains in supervised learning. This work theoretically identifies the root cause: inherent degree bias and structural imbalance in GNN aggregation mechanisms, which undermine robustness to edge perturbations. To address this, we propose Aggregation Buffer (AB)—a lightweight, learnable, plug-and-play module that introduces buffered aggregation parameters without altering network architecture or training procedures. AB jointly models neighbor importance and structural bias, and is compatible with mainstream GNNs (e.g., GCN, GAT). Extensive experiments on benchmark datasets demonstrate consistent improvements in node classification accuracy and significantly enhanced robustness against edge perturbations and structural noise. Code and data are publicly available.
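The random edge dropout that DropEdge performs can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the `drop_edge` function name and signature are my own:

```python
import random

def drop_edge(edges, p, rng=None):
    """Keep each edge independently with probability 1 - p (DropEdge-style).

    A fresh subgraph is sampled on every call, so resampling each training
    epoch exposes the GNN to diverse graph structures.
    """
    rng = rng or random.Random()
    return [e for e in edges if rng.random() >= p]

# Toy graph: 5 edges; drop each with probability 0.4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
kept = drop_edge(edges, p=0.4, rng=random.Random(0))
```

In practice this sampling runs once per training epoch (or per batch), while evaluation uses the full edge set.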
📝 Abstract
We revisit DropEdge, a data augmentation technique for GNNs that randomly removes edges to expose diverse graph structures during training. While it is a promising approach for reducing overfitting to specific connections in the graph, we observe that its performance gain in supervised learning tasks is significantly limited. To understand why, we provide a theoretical analysis showing that the limited performance of DropEdge stems from a fundamental limitation shared by many GNN architectures. Based on this analysis, we propose Aggregation Buffer, a parameter block specifically designed to improve the robustness of GNNs by addressing this limitation of DropEdge. Our method is compatible with any GNN model and shows consistent performance improvements on multiple datasets. Moreover, it serves as a unifying solution to well-known problems such as degree bias and structural disparity. Code and datasets are available at https://github.com/dooho00/agg-buffer.
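To make the idea of a plug-and-play parameter block concrete, here is a purely hypothetical sketch of a buffered aggregation step. The abstract does not specify the Aggregation Buffer's actual parameterization; `buffered_aggregate` and the shared `buffer` vector below are my assumptions, standing in for a learnable term added after neighbor aggregation:

```python
import numpy as np

def buffered_aggregate(h, adj, buffer):
    """Mean-aggregate neighbor features, then add a learnable buffer term.

    Hypothetical sketch only: `buffer` stands in for the learnable
    parameter block that would compensate for degree-related bias in the
    aggregation output; the paper's actual design may differ.
    """
    deg = adj.sum(axis=1, keepdims=True)   # per-node degree
    agg = (adj @ h) / np.maximum(deg, 1)   # degree-normalized (mean) aggregation
    return agg + buffer                    # buffered aggregation output

# Toy example: 3 nodes with 2-dim features
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0]])
buffer = np.array([0.1, -0.1])            # assumed form: one shared offset vector
out = buffered_aggregate(h, adj, buffer)
```

Because the buffer is added after aggregation, it leaves the base GNN's weights and message-passing untouched, which is what makes such a block compatible with architectures like GCN or GAT.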